00:00:00.001 Started by upstream project "autotest-per-patch" build number 130589 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.095 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:45.574 The recommended git tool is: git 00:00:45.575 using credential 00000000-0000-0000-0000-000000000002 00:00:45.576 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:45.587 Fetching changes from the remote Git repository 00:00:45.589 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:45.602 Using shallow fetch with depth 1 00:00:45.602 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:45.602 > git --version # timeout=10 00:00:45.612 > git --version # 'git version 2.39.2' 00:00:45.612 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:45.625 Setting http proxy: proxy-dmz.intel.com:911 00:00:45.625 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:01:00.788 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:01:00.801 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:01:00.814 Checking out Revision d37d6e8a0abef39b377a5f0531b43b2efbbebf34 (FETCH_HEAD) 00:01:00.815 > git config core.sparsecheckout # timeout=10 00:01:00.828 > git read-tree -mu HEAD # timeout=10 00:01:00.846 > git checkout -f d37d6e8a0abef39b377a5f0531b43b2efbbebf34 # timeout=5 00:01:00.868 Commit message: "pool: serialize build page context to json" 00:01:00.869 > git rev-list --no-walk d37d6e8a0abef39b377a5f0531b43b2efbbebf34 # timeout=10 00:01:00.952 [Pipeline] Start of Pipeline 00:01:00.967 [Pipeline] library 00:01:00.969 Loading library shm_lib@master 00:01:00.969 Library shm_lib@master is cached. Copying from home. 
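Editor's note: the preamble above is a standard Jenkins shallow checkout of the job build-pool repo: a single commit is fetched from refs/heads/master with --depth=1, FETCH_HEAD is resolved, and the pinned revision is checked out detached. Roughly the same sequence by hand would look like the sketch below; REPO and REV are copied from the log, and credentials (which Jenkins supplies via GIT_ASKPASS) are assumed to be available.

REPO=https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
REV=d37d6e8a0abef39b377a5f0531b43b2efbbebf34
git init jbp && cd jbp
git remote add origin "$REPO"
# Shallow fetch: only the tip of master, matching "Using shallow fetch with depth 1"
git fetch --tags --force --progress --depth=1 origin refs/heads/master
git checkout -f "$REV"    # detached HEAD at the pinned revision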
00:01:00.986 [Pipeline] node 00:01:15.988 Still waiting to schedule task 00:01:15.989 ‘CYP13’ is offline 00:01:15.989 ‘CYP18’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘CYP19’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘FCP03’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘FCP04’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘FCP07’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘FCP08’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘FCP09’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘FCP10’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘FCP11’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘FCP12’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘GP10’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘GP13’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘GP15’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘GP16’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘GP18’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘GP19’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘GP21’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘GP23’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘GP4’ is offline 00:01:15.989 ‘GP5’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘ImageBuilder1’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘Jenkins’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘ME1’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘ME2’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘ME3’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘PE5’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘SM1’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘SM25’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘SM26’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘SM27’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘SM28’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘SM29’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘SM2’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘SM30’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘SM31’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘SM32’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘SM33’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘SM34’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘SM35’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘SM40’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘SM5’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘SM6’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘SM7’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘SM8’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘VM-host-PE1’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘VM-host-PE2’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘VM-host-PE3’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘VM-host-PE4’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘VM-host-SM18’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘WCP5’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘WCP8’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘WFP12’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘WFP13’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.989 ‘WFP15’ is offline 00:01:15.990 ‘WFP20’ is offline 00:01:15.990 ‘WFP22’ is offline 00:01:15.990 ‘WFP26’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.990 ‘WFP29’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.990 ‘WFP2’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.990 ‘WFP32’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.990 ‘WFP56’ 
doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.990 ‘WFP63’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.990 ‘WFP69’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.990 ‘WFP9’ is offline 00:01:15.990 ‘agt-_autotest_29482-17439’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.990 ‘agt-_autotest_29484-17441’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.990 ‘agt-_autotest_29488-17443’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.990 ‘agt-_autotest_29489-17442’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.990 ‘agt-r_autotest_2689-17440’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.990 ‘ipxe-staging’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.990 ‘prc_bsc_waikikibeach64’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.990 ‘spdk-pxe-01’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:01:15.990 ‘spdk-pxe-02’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:12:56.263 Running on CYP10 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:12:56.265 [Pipeline] { 00:12:56.278 [Pipeline] catchError 00:12:56.280 [Pipeline] { 00:12:56.294 [Pipeline] wrap 00:12:56.302 [Pipeline] { 00:12:56.309 [Pipeline] stage 00:12:56.311 [Pipeline] { (Prologue) 00:12:56.503 [Pipeline] sh 00:12:56.808 + logger -p user.info -t JENKINS-CI 00:12:56.827 [Pipeline] echo 00:12:56.828 Node: CYP10 00:12:56.837 [Pipeline] sh 00:12:57.140 [Pipeline] setCustomBuildProperty 00:12:57.157 [Pipeline] echo 00:12:57.159 Cleanup processes 00:12:57.166 [Pipeline] sh 00:12:57.459 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:57.459 4072563 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:57.474 [Pipeline] sh 00:12:57.761 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:57.761 ++ grep -v 'sudo pgrep' 00:12:57.761 ++ awk '{print $1}' 00:12:57.761 + sudo kill -9 00:12:57.761 + true 00:12:57.776 [Pipeline] cleanWs 00:12:57.786 [WS-CLEANUP] Deleting project workspace... 00:12:57.786 [WS-CLEANUP] Deferred wipeout is used... 
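Editor's note: the process-cleanup step above is a three-stage pipeline: pgrep -af lists candidate PIDs with their full command lines, grep -v drops the pgrep invocation itself, and awk keeps only the PID column. In this run no stale SPDK process was found, so kill -9 received no arguments and the trailing "+ true" kept the step from failing. A standalone sketch of the same pattern; WS is a placeholder for the workspace path, and the xargs -r form is my tightening of the pipeline, not what the job literally runs.

WS=/var/jenkins/workspace/nvmf-tcp-phy-autotest
# List matching processes, drop the pgrep line itself, keep the PID column,
# then kill hard; xargs -r skips kill entirely when nothing matched.
sudo pgrep -af "$WS/spdk" | grep -v 'sudo pgrep' | awk '{print $1}' \
    | xargs -r sudo kill -9 || true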
00:12:57.793 [WS-CLEANUP] done 00:12:57.797 [Pipeline] setCustomBuildProperty 00:12:57.811 [Pipeline] sh 00:12:58.096 + sudo git config --global --replace-all safe.directory '*' 00:12:58.200 [Pipeline] httpRequest 00:12:58.607 [Pipeline] echo 00:12:58.608 Sorcerer 10.211.164.101 is alive 00:12:58.617 [Pipeline] retry 00:12:58.618 [Pipeline] { 00:12:58.629 [Pipeline] httpRequest 00:12:58.633 HttpMethod: GET 00:12:58.634 URL: http://10.211.164.101/packages/jbp_d37d6e8a0abef39b377a5f0531b43b2efbbebf34.tar.gz 00:12:58.634 Sending request to url: http://10.211.164.101/packages/jbp_d37d6e8a0abef39b377a5f0531b43b2efbbebf34.tar.gz 00:12:58.637 Response Code: HTTP/1.1 200 OK 00:12:58.638 Success: Status code 200 is in the accepted range: 200,404 00:12:58.638 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_d37d6e8a0abef39b377a5f0531b43b2efbbebf34.tar.gz 00:12:58.784 [Pipeline] } 00:12:58.803 [Pipeline] // retry 00:12:58.810 [Pipeline] sh 00:12:59.121 + tar --no-same-owner -xf jbp_d37d6e8a0abef39b377a5f0531b43b2efbbebf34.tar.gz 00:12:59.138 [Pipeline] httpRequest 00:12:59.543 [Pipeline] echo 00:12:59.545 Sorcerer 10.211.164.101 is alive 00:12:59.554 [Pipeline] retry 00:12:59.556 [Pipeline] { 00:12:59.570 [Pipeline] httpRequest 00:12:59.575 HttpMethod: GET 00:12:59.575 URL: http://10.211.164.101/packages/spdk_1b1c3081e7433ef3ee5ea712b81b554bbbca8f0a.tar.gz 00:12:59.576 Sending request to url: http://10.211.164.101/packages/spdk_1b1c3081e7433ef3ee5ea712b81b554bbbca8f0a.tar.gz 00:12:59.579 Response Code: HTTP/1.1 200 OK 00:12:59.580 Success: Status code 200 is in the accepted range: 200,404 00:12:59.580 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_1b1c3081e7433ef3ee5ea712b81b554bbbca8f0a.tar.gz 00:13:01.806 [Pipeline] } 00:13:01.824 [Pipeline] // retry 00:13:01.831 [Pipeline] sh 00:13:02.117 + tar --no-same-owner -xf spdk_1b1c3081e7433ef3ee5ea712b81b554bbbca8f0a.tar.gz 00:13:05.460 [Pipeline] sh 00:13:05.749 + git -C spdk log --oneline -n5 00:13:05.749 1b1c3081e bdev: explicitly inline bdev_channel_get_io() 00:13:05.749 165425556 bdev/passthru: add bdev_io_stack support 00:13:05.749 4f975f22c [TEST] bdev: save stack_frame instead of bdev_io in io_submitted TAILQ 00:13:05.749 392076696 bdev: Add spdk_bdev_io_submit API 00:13:05.749 ef2413376 bdev: Add spdk_bdev_io_to_ctx 00:13:05.761 [Pipeline] } 00:13:05.775 [Pipeline] // stage 00:13:05.785 [Pipeline] stage 00:13:05.788 [Pipeline] { (Prepare) 00:13:05.806 [Pipeline] writeFile 00:13:05.821 [Pipeline] sh 00:13:06.109 + logger -p user.info -t JENKINS-CI 00:13:06.124 [Pipeline] sh 00:13:06.411 + logger -p user.info -t JENKINS-CI 00:13:06.424 [Pipeline] sh 00:13:06.711 + cat autorun-spdk.conf 00:13:06.711 SPDK_RUN_FUNCTIONAL_TEST=1 00:13:06.711 SPDK_TEST_NVMF=1 00:13:06.711 SPDK_TEST_NVME_CLI=1 00:13:06.711 SPDK_TEST_NVMF_TRANSPORT=tcp 00:13:06.711 SPDK_TEST_NVMF_NICS=e810 00:13:06.711 SPDK_TEST_VFIOUSER=1 00:13:06.711 SPDK_RUN_UBSAN=1 00:13:06.711 NET_TYPE=phy 00:13:06.719 RUN_NIGHTLY=0 00:13:06.723 [Pipeline] readFile 00:13:06.746 [Pipeline] withEnv 00:13:06.748 [Pipeline] { 00:13:06.760 [Pipeline] sh 00:13:07.048 + set -ex 00:13:07.048 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:13:07.048 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:13:07.048 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:13:07.048 ++ SPDK_TEST_NVMF=1 00:13:07.048 ++ SPDK_TEST_NVME_CLI=1 00:13:07.048 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:13:07.048 ++ SPDK_TEST_NVMF_NICS=e810 00:13:07.048 ++ 
SPDK_TEST_VFIOUSER=1 00:13:07.048 ++ SPDK_RUN_UBSAN=1 00:13:07.048 ++ NET_TYPE=phy 00:13:07.048 ++ RUN_NIGHTLY=0 00:13:07.048 + case $SPDK_TEST_NVMF_NICS in 00:13:07.048 + DRIVERS=ice 00:13:07.048 + [[ tcp == \r\d\m\a ]] 00:13:07.048 + [[ -n ice ]] 00:13:07.048 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:13:07.048 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:13:12.340 rmmod: ERROR: Module irdma is not currently loaded 00:13:12.340 rmmod: ERROR: Module i40iw is not currently loaded 00:13:12.340 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:13:12.340 + true 00:13:12.340 + for D in $DRIVERS 00:13:12.340 + sudo modprobe ice 00:13:12.340 + exit 0 00:13:12.350 [Pipeline] } 00:13:12.367 [Pipeline] // withEnv 00:13:12.373 [Pipeline] } 00:13:12.387 [Pipeline] // stage 00:13:12.396 [Pipeline] catchError 00:13:12.398 [Pipeline] { 00:13:12.415 [Pipeline] timeout 00:13:12.415 Timeout set to expire in 1 hr 0 min 00:13:12.417 [Pipeline] { 00:13:12.431 [Pipeline] stage 00:13:12.433 [Pipeline] { (Tests) 00:13:12.448 [Pipeline] sh 00:13:12.737 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:13:12.737 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:13:12.737 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:13:12.737 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:13:12.737 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:13:12.737 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:13:12.737 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:13:12.737 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:13:12.737 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:13:12.737 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:13:12.737 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:13:12.737 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:13:12.737 + source /etc/os-release 00:13:12.737 ++ NAME='Fedora Linux' 00:13:12.737 ++ VERSION='39 (Cloud Edition)' 00:13:12.737 ++ ID=fedora 00:13:12.737 ++ VERSION_ID=39 00:13:12.737 ++ VERSION_CODENAME= 00:13:12.737 ++ PLATFORM_ID=platform:f39 00:13:12.737 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:13:12.737 ++ ANSI_COLOR='0;38;2;60;110;180' 00:13:12.737 ++ LOGO=fedora-logo-icon 00:13:12.737 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:13:12.737 ++ HOME_URL=https://fedoraproject.org/ 00:13:12.737 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:13:12.737 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:13:12.737 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:13:12.737 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:13:12.737 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:13:12.737 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:13:12.737 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:13:12.737 ++ SUPPORT_END=2024-11-12 00:13:12.737 ++ VARIANT='Cloud Edition' 00:13:12.737 ++ VARIANT_ID=cloud 00:13:12.737 + uname -a 00:13:12.737 Linux spdk-cyp-10 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:13:12.737 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:13:16.108 Hugepages 00:13:16.108 node hugesize free / total 00:13:16.108 node0 1048576kB 0 / 0 00:13:16.108 node0 2048kB 0 / 0 00:13:16.108 node1 1048576kB 0 / 0 00:13:16.108 node1 2048kB 0 / 0 00:13:16.108 00:13:16.108 Type BDF Vendor Device NUMA Driver Device Block devices 00:13:16.108 I/OAT 0000:00:01.0 8086 
0b00 0 ioatdma - - 00:13:16.108 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:13:16.108 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:13:16.108 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:13:16.108 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:13:16.108 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:13:16.108 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:13:16.108 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:13:16.108 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:13:16.108 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:13:16.108 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:13:16.108 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:13:16.108 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:13:16.108 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:13:16.108 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:13:16.108 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:13:16.108 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:13:16.108 + rm -f /tmp/spdk-ld-path 00:13:16.108 + source autorun-spdk.conf 00:13:16.108 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:13:16.108 ++ SPDK_TEST_NVMF=1 00:13:16.108 ++ SPDK_TEST_NVME_CLI=1 00:13:16.108 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:13:16.108 ++ SPDK_TEST_NVMF_NICS=e810 00:13:16.108 ++ SPDK_TEST_VFIOUSER=1 00:13:16.108 ++ SPDK_RUN_UBSAN=1 00:13:16.108 ++ NET_TYPE=phy 00:13:16.108 ++ RUN_NIGHTLY=0 00:13:16.108 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:13:16.108 + [[ -n '' ]] 00:13:16.108 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:13:16.108 + for M in /var/spdk/build-*-manifest.txt 00:13:16.108 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:13:16.108 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:13:16.108 + for M in /var/spdk/build-*-manifest.txt 00:13:16.108 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:13:16.108 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:13:16.108 + for M in /var/spdk/build-*-manifest.txt 00:13:16.108 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:13:16.108 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:13:16.108 ++ uname 00:13:16.108 + [[ Linux == \L\i\n\u\x ]] 00:13:16.108 + sudo dmesg -T 00:13:16.108 + sudo dmesg --clear 00:13:16.108 + dmesg_pid=4074102 00:13:16.108 + [[ Fedora Linux == FreeBSD ]] 00:13:16.108 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:16.108 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:16.108 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:13:16.108 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:13:16.108 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:13:16.108 + [[ -x /usr/src/fio-static/fio ]] 00:13:16.108 + export FIO_BIN=/usr/src/fio-static/fio 00:13:16.108 + FIO_BIN=/usr/src/fio-static/fio 00:13:16.108 + sudo dmesg -Tw 00:13:16.108 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:13:16.108 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:13:16.108 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:13:16.108 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:16.108 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:16.108 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:13:16.108 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:16.108 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:16.108 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:13:16.108 Test configuration: 00:13:16.108 SPDK_RUN_FUNCTIONAL_TEST=1 00:13:16.108 SPDK_TEST_NVMF=1 00:13:16.108 SPDK_TEST_NVME_CLI=1 00:13:16.108 SPDK_TEST_NVMF_TRANSPORT=tcp 00:13:16.108 SPDK_TEST_NVMF_NICS=e810 00:13:16.108 SPDK_TEST_VFIOUSER=1 00:13:16.108 SPDK_RUN_UBSAN=1 00:13:16.108 NET_TYPE=phy 00:13:16.108 RUN_NIGHTLY=0 22:12:11 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:13:16.108 22:12:11 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:16.108 22:12:11 -- scripts/common.sh@15 -- $ shopt -s extglob 00:13:16.108 22:12:11 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:13:16.108 22:12:11 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:16.108 22:12:11 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:16.109 22:12:11 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.109 22:12:11 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.109 22:12:11 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.109 22:12:11 -- paths/export.sh@5 -- $ export PATH 00:13:16.109 22:12:11 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.370 22:12:11 -- common/autobuild_common.sh@478 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:13:16.370 22:12:11 -- common/autobuild_common.sh@479 -- $ date +%s 00:13:16.370 22:12:11 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727813531.XXXXXX 00:13:16.370 22:12:11 -- common/autobuild_common.sh@479 -- $ 
SPDK_WORKSPACE=/tmp/spdk_1727813531.FTBLkN 00:13:16.370 22:12:11 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:13:16.370 22:12:11 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']' 00:13:16.370 22:12:11 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:13:16.370 22:12:11 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:13:16.370 22:12:11 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:13:16.370 22:12:11 -- common/autobuild_common.sh@495 -- $ get_config_params 00:13:16.370 22:12:11 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:13:16.370 22:12:11 -- common/autotest_common.sh@10 -- $ set +x 00:13:16.370 22:12:11 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:13:16.370 22:12:11 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:13:16.370 22:12:11 -- pm/common@17 -- $ local monitor 00:13:16.370 22:12:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:13:16.370 22:12:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:13:16.370 22:12:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:13:16.370 22:12:11 -- pm/common@21 -- $ date +%s 00:13:16.370 22:12:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:13:16.370 22:12:11 -- pm/common@21 -- $ date +%s 00:13:16.370 22:12:11 -- pm/common@25 -- $ sleep 1 00:13:16.370 22:12:11 -- pm/common@21 -- $ date +%s 00:13:16.370 22:12:11 -- pm/common@21 -- $ date +%s 00:13:16.370 22:12:11 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727813531 00:13:16.370 22:12:11 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727813531 00:13:16.370 22:12:11 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727813531 00:13:16.370 22:12:11 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727813531 00:13:16.370 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727813531_collect-vmstat.pm.log 00:13:16.370 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727813531_collect-cpu-load.pm.log 00:13:16.370 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727813531_collect-cpu-temp.pm.log 00:13:16.370 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727813531_collect-bmc-pm.bmc.pm.log 00:13:17.311 22:12:12 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:13:17.311 22:12:12 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:13:17.311 22:12:12 -- spdk/autobuild.sh@12 -- $ umask 022 00:13:17.311 22:12:12 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:13:17.311 22:12:12 -- spdk/autobuild.sh@16 -- $ date -u 00:13:17.311 Tue Oct 1 08:12:12 PM UTC 2024 00:13:17.311 22:12:12 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:13:17.311 v25.01-pre-28-g1b1c3081e 00:13:17.311 22:12:12 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:13:17.311 22:12:12 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:13:17.311 22:12:12 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:13:17.311 22:12:12 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:13:17.311 22:12:12 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:13:17.311 22:12:12 -- common/autotest_common.sh@10 -- $ set +x 00:13:17.311 ************************************ 00:13:17.311 START TEST ubsan 00:13:17.311 ************************************ 00:13:17.311 22:12:12 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:13:17.311 using ubsan 00:13:17.311 00:13:17.311 real 0m0.001s 00:13:17.311 user 0m0.000s 00:13:17.311 sys 0m0.000s 00:13:17.311 22:12:12 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:13:17.311 22:12:12 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:13:17.311 ************************************ 00:13:17.311 END TEST ubsan 00:13:17.311 ************************************ 00:13:17.311 22:12:12 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:13:17.311 22:12:12 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:13:17.311 22:12:12 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:13:17.311 22:12:12 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:13:17.311 22:12:12 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:13:17.311 22:12:12 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:13:17.311 22:12:12 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:13:17.311 22:12:12 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:13:17.311 22:12:12 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:13:17.571 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:13:17.571 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:13:17.831 Using 'verbs' RDMA provider 00:13:33.691 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:13:45.927 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:13:46.188 Creating mk/config.mk...done. 00:13:46.188 Creating mk/cc.flags.mk...done. 00:13:46.188 Type 'make' to build. 
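Editor's note: the build is driven by spdk/autorun.sh with the autorun-spdk.conf shown above, and the configure flags are echoed in the log. A by-hand equivalent of the configure-and-build step, assuming the same workspace layout; the job itself uses make -j144 to match the test node, scaled here to the local core count.

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Flag set copied verbatim from the autobuild.sh configure line above.
./configure --enable-debug --enable-werror --with-rdma --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
make -j"$(nproc)"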
00:13:46.188 22:12:41 -- spdk/autobuild.sh@70 -- $ run_test make make -j144 00:13:46.188 22:12:41 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:13:46.188 22:12:41 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:13:46.188 22:12:41 -- common/autotest_common.sh@10 -- $ set +x 00:13:46.188 ************************************ 00:13:46.188 START TEST make 00:13:46.188 ************************************ 00:13:46.188 22:12:41 make -- common/autotest_common.sh@1125 -- $ make -j144 00:13:46.450 make[1]: Nothing to be done for 'all'. 00:13:47.832 The Meson build system 00:13:47.832 Version: 1.5.0 00:13:47.832 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:13:47.832 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:13:47.832 Build type: native build 00:13:47.832 Project name: libvfio-user 00:13:47.832 Project version: 0.0.1 00:13:47.832 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:13:47.832 C linker for the host machine: cc ld.bfd 2.40-14 00:13:47.832 Host machine cpu family: x86_64 00:13:47.832 Host machine cpu: x86_64 00:13:47.832 Run-time dependency threads found: YES 00:13:47.832 Library dl found: YES 00:13:47.832 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:13:47.832 Run-time dependency json-c found: YES 0.17 00:13:47.832 Run-time dependency cmocka found: YES 1.1.7 00:13:47.832 Program pytest-3 found: NO 00:13:47.832 Program flake8 found: NO 00:13:47.832 Program misspell-fixer found: NO 00:13:47.832 Program restructuredtext-lint found: NO 00:13:47.832 Program valgrind found: YES (/usr/bin/valgrind) 00:13:47.832 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:13:47.832 Compiler for C supports arguments -Wmissing-declarations: YES 00:13:47.832 Compiler for C supports arguments -Wwrite-strings: YES 00:13:47.832 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:13:47.832 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:13:47.832 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:13:47.832 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:13:47.832 Build targets in project: 8 00:13:47.832 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:13:47.832 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:13:47.832 00:13:47.832 libvfio-user 0.0.1 00:13:47.832 00:13:47.832 User defined options 00:13:47.832 buildtype : debug 00:13:47.832 default_library: shared 00:13:47.832 libdir : /usr/local/lib 00:13:47.832 00:13:47.832 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:13:48.090 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:13:48.090 [1/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:13:48.090 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:13:48.090 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:13:48.090 [4/37] Compiling C object samples/null.p/null.c.o 00:13:48.349 [5/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:13:48.349 [6/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:13:48.349 [7/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:13:48.349 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:13:48.349 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:13:48.349 [10/37] Compiling C object samples/lspci.p/lspci.c.o 00:13:48.349 [11/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:13:48.349 [12/37] Compiling C object test/unit_tests.p/mocks.c.o 00:13:48.349 [13/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:13:48.349 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:13:48.349 [15/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:13:48.349 [16/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:13:48.349 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:13:48.349 [18/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:13:48.349 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:13:48.349 [20/37] Compiling C object samples/server.p/server.c.o 00:13:48.349 [21/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:13:48.349 [22/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:13:48.349 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:13:48.349 [24/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:13:48.349 [25/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:13:48.349 [26/37] Compiling C object samples/client.p/client.c.o 00:13:48.349 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:13:48.349 [28/37] Linking target samples/client 00:13:48.349 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:13:48.349 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:13:48.658 [31/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:13:48.658 [32/37] Linking target test/unit_tests 00:13:48.658 [33/37] Linking target samples/null 00:13:48.658 [34/37] Linking target samples/gpio-pci-idio-16 00:13:48.658 [35/37] Linking target samples/server 00:13:48.658 [36/37] Linking target samples/shadow_ioeventfd_server 00:13:48.658 [37/37] Linking target samples/lspci 00:13:48.658 INFO: autodetecting backend as ninja 00:13:48.658 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
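Editor's note: the libvfio-user submodule is configured out-of-tree with meson, compiled with ninja, and then installed under a DESTDIR staging root rather than straight into /usr/local (the DESTDIR line that follows is the echoed install command). A sketch of the full sequence, with option values taken from meson's "User defined options" summary above; SRC and BUILD are placeholders for the paths in the log.

SRC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
meson setup "$BUILD" "$SRC" \
    --buildtype=debug --default-library=shared --libdir=/usr/local/lib
ninja -C "$BUILD"
# Staged install: files land under $DESTDIR/usr/local/... instead of /
DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user \
    meson install --quiet -C "$BUILD"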
00:13:48.658 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:13:48.917 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:13:48.917 ninja: no work to do. 00:13:55.503 The Meson build system 00:13:55.503 Version: 1.5.0 00:13:55.503 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:13:55.503 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:13:55.503 Build type: native build 00:13:55.503 Program cat found: YES (/usr/bin/cat) 00:13:55.503 Project name: DPDK 00:13:55.503 Project version: 24.03.0 00:13:55.503 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:13:55.503 C linker for the host machine: cc ld.bfd 2.40-14 00:13:55.503 Host machine cpu family: x86_64 00:13:55.503 Host machine cpu: x86_64 00:13:55.503 Message: ## Building in Developer Mode ## 00:13:55.503 Program pkg-config found: YES (/usr/bin/pkg-config) 00:13:55.503 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:13:55.503 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:13:55.503 Program python3 found: YES (/usr/bin/python3) 00:13:55.503 Program cat found: YES (/usr/bin/cat) 00:13:55.503 Compiler for C supports arguments -march=native: YES 00:13:55.503 Checking for size of "void *" : 8 00:13:55.503 Checking for size of "void *" : 8 (cached) 00:13:55.503 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:13:55.503 Library m found: YES 00:13:55.503 Library numa found: YES 00:13:55.503 Has header "numaif.h" : YES 00:13:55.503 Library fdt found: NO 00:13:55.503 Library execinfo found: NO 00:13:55.503 Has header "execinfo.h" : YES 00:13:55.503 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:13:55.503 Run-time dependency libarchive found: NO (tried pkgconfig) 00:13:55.503 Run-time dependency libbsd found: NO (tried pkgconfig) 00:13:55.503 Run-time dependency jansson found: NO (tried pkgconfig) 00:13:55.503 Run-time dependency openssl found: YES 3.1.1 00:13:55.503 Run-time dependency libpcap found: YES 1.10.4 00:13:55.503 Has header "pcap.h" with dependency libpcap: YES 00:13:55.503 Compiler for C supports arguments -Wcast-qual: YES 00:13:55.503 Compiler for C supports arguments -Wdeprecated: YES 00:13:55.503 Compiler for C supports arguments -Wformat: YES 00:13:55.503 Compiler for C supports arguments -Wformat-nonliteral: NO 00:13:55.503 Compiler for C supports arguments -Wformat-security: NO 00:13:55.503 Compiler for C supports arguments -Wmissing-declarations: YES 00:13:55.503 Compiler for C supports arguments -Wmissing-prototypes: YES 00:13:55.503 Compiler for C supports arguments -Wnested-externs: YES 00:13:55.503 Compiler for C supports arguments -Wold-style-definition: YES 00:13:55.503 Compiler for C supports arguments -Wpointer-arith: YES 00:13:55.503 Compiler for C supports arguments -Wsign-compare: YES 00:13:55.503 Compiler for C supports arguments -Wstrict-prototypes: YES 00:13:55.503 Compiler for C supports arguments -Wundef: YES 00:13:55.503 Compiler for C supports arguments -Wwrite-strings: YES 00:13:55.503 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:13:55.503 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:13:55.503 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:13:55.503 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:13:55.503 Program objdump found: YES (/usr/bin/objdump) 00:13:55.503 Compiler for C supports arguments -mavx512f: YES 00:13:55.503 Checking if "AVX512 checking" compiles: YES 00:13:55.503 Fetching value of define "__SSE4_2__" : 1 00:13:55.503 Fetching value of define "__AES__" : 1 00:13:55.503 Fetching value of define "__AVX__" : 1 00:13:55.503 Fetching value of define "__AVX2__" : 1 00:13:55.503 Fetching value of define "__AVX512BW__" : 1 00:13:55.503 Fetching value of define "__AVX512CD__" : 1 00:13:55.503 Fetching value of define "__AVX512DQ__" : 1 00:13:55.503 Fetching value of define "__AVX512F__" : 1 00:13:55.503 Fetching value of define "__AVX512VL__" : 1 00:13:55.503 Fetching value of define "__PCLMUL__" : 1 00:13:55.503 Fetching value of define "__RDRND__" : 1 00:13:55.503 Fetching value of define "__RDSEED__" : 1 00:13:55.503 Fetching value of define "__VPCLMULQDQ__" : 1 00:13:55.503 Fetching value of define "__znver1__" : (undefined) 00:13:55.503 Fetching value of define "__znver2__" : (undefined) 00:13:55.503 Fetching value of define "__znver3__" : (undefined) 00:13:55.503 Fetching value of define "__znver4__" : (undefined) 00:13:55.503 Compiler for C supports arguments -Wno-format-truncation: YES 00:13:55.503 Message: lib/log: Defining dependency "log" 00:13:55.503 Message: lib/kvargs: Defining dependency "kvargs" 00:13:55.503 Message: lib/telemetry: Defining dependency "telemetry" 00:13:55.503 Checking for function "getentropy" : NO 00:13:55.503 Message: lib/eal: Defining dependency "eal" 00:13:55.503 Message: lib/ring: Defining dependency "ring" 00:13:55.503 Message: lib/rcu: Defining dependency "rcu" 00:13:55.503 Message: lib/mempool: Defining dependency "mempool" 00:13:55.503 Message: lib/mbuf: Defining dependency "mbuf" 00:13:55.503 Fetching value of define "__PCLMUL__" : 1 (cached) 00:13:55.503 Fetching value of define "__AVX512F__" : 1 (cached) 00:13:55.503 Fetching value of define "__AVX512BW__" : 1 (cached) 00:13:55.503 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:13:55.503 Fetching value of define "__AVX512VL__" : 1 (cached) 00:13:55.503 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:13:55.503 Compiler for C supports arguments -mpclmul: YES 00:13:55.503 Compiler for C supports arguments -maes: YES 00:13:55.503 Compiler for C supports arguments -mavx512f: YES (cached) 00:13:55.503 Compiler for C supports arguments -mavx512bw: YES 00:13:55.503 Compiler for C supports arguments -mavx512dq: YES 00:13:55.503 Compiler for C supports arguments -mavx512vl: YES 00:13:55.503 Compiler for C supports arguments -mvpclmulqdq: YES 00:13:55.503 Compiler for C supports arguments -mavx2: YES 00:13:55.503 Compiler for C supports arguments -mavx: YES 00:13:55.503 Message: lib/net: Defining dependency "net" 00:13:55.503 Message: lib/meter: Defining dependency "meter" 00:13:55.503 Message: lib/ethdev: Defining dependency "ethdev" 00:13:55.503 Message: lib/pci: Defining dependency "pci" 00:13:55.503 Message: lib/cmdline: Defining dependency "cmdline" 00:13:55.503 Message: lib/hash: Defining dependency "hash" 00:13:55.503 Message: lib/timer: Defining dependency "timer" 00:13:55.503 Message: lib/compressdev: Defining dependency "compressdev" 00:13:55.503 Message: lib/cryptodev: Defining dependency "cryptodev" 00:13:55.503 Message: lib/dmadev: Defining dependency "dmadev" 
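Editor's note: the long run of "Fetching value of define ..." entries is meson asking the compiler which SIMD macros -march=native predefines on this node; every AVX-512 family macro comes back 1, so DPDK enables those code paths, while the __znver*__ macros are undefined (this is an Intel host). The same probe can be reproduced directly with the preprocessor, as a one-off check outside the job:

# Dump the predefined macros under -march=native and pick out the AVX-512 family.
cc -march=native -dM -E - </dev/null | grep -E '__AVX512(F|BW|CD|DQ|VL)__'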
00:13:55.503 Compiler for C supports arguments -Wno-cast-qual: YES 00:13:55.503 Message: lib/power: Defining dependency "power" 00:13:55.503 Message: lib/reorder: Defining dependency "reorder" 00:13:55.503 Message: lib/security: Defining dependency "security" 00:13:55.503 Has header "linux/userfaultfd.h" : YES 00:13:55.503 Has header "linux/vduse.h" : YES 00:13:55.503 Message: lib/vhost: Defining dependency "vhost" 00:13:55.503 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:13:55.503 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:13:55.503 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:13:55.503 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:13:55.503 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:13:55.503 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:13:55.503 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:13:55.503 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:13:55.503 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:13:55.503 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:13:55.503 Program doxygen found: YES (/usr/local/bin/doxygen) 00:13:55.503 Configuring doxy-api-html.conf using configuration 00:13:55.503 Configuring doxy-api-man.conf using configuration 00:13:55.503 Program mandb found: YES (/usr/bin/mandb) 00:13:55.503 Program sphinx-build found: NO 00:13:55.503 Configuring rte_build_config.h using configuration 00:13:55.503 Message: 00:13:55.503 ================= 00:13:55.503 Applications Enabled 00:13:55.503 ================= 00:13:55.503 00:13:55.503 apps: 00:13:55.503 00:13:55.503 00:13:55.503 Message: 00:13:55.503 ================= 00:13:55.503 Libraries Enabled 00:13:55.503 ================= 00:13:55.503 00:13:55.503 libs: 00:13:55.503 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:13:55.503 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:13:55.503 cryptodev, dmadev, power, reorder, security, vhost, 00:13:55.503 00:13:55.503 Message: 00:13:55.503 =============== 00:13:55.503 Drivers Enabled 00:13:55.503 =============== 00:13:55.503 00:13:55.503 common: 00:13:55.503 00:13:55.503 bus: 00:13:55.503 pci, vdev, 00:13:55.503 mempool: 00:13:55.503 ring, 00:13:55.503 dma: 00:13:55.503 00:13:55.503 net: 00:13:55.503 00:13:55.503 crypto: 00:13:55.503 00:13:55.503 compress: 00:13:55.503 00:13:55.503 vdpa: 00:13:55.503 00:13:55.503 00:13:55.503 Message: 00:13:55.503 ================= 00:13:55.503 Content Skipped 00:13:55.503 ================= 00:13:55.503 00:13:55.503 apps: 00:13:55.503 dumpcap: explicitly disabled via build config 00:13:55.503 graph: explicitly disabled via build config 00:13:55.503 pdump: explicitly disabled via build config 00:13:55.503 proc-info: explicitly disabled via build config 00:13:55.503 test-acl: explicitly disabled via build config 00:13:55.503 test-bbdev: explicitly disabled via build config 00:13:55.503 test-cmdline: explicitly disabled via build config 00:13:55.503 test-compress-perf: explicitly disabled via build config 00:13:55.503 test-crypto-perf: explicitly disabled via build config 00:13:55.503 test-dma-perf: explicitly disabled via build config 00:13:55.503 test-eventdev: explicitly disabled via build config 00:13:55.503 test-fib: explicitly disabled via build config 00:13:55.503 test-flow-perf: explicitly disabled via build config 00:13:55.503 test-gpudev: explicitly disabled 
via build config 00:13:55.503 test-mldev: explicitly disabled via build config 00:13:55.503 test-pipeline: explicitly disabled via build config 00:13:55.503 test-pmd: explicitly disabled via build config 00:13:55.503 test-regex: explicitly disabled via build config 00:13:55.503 test-sad: explicitly disabled via build config 00:13:55.503 test-security-perf: explicitly disabled via build config 00:13:55.503 00:13:55.503 libs: 00:13:55.503 argparse: explicitly disabled via build config 00:13:55.503 metrics: explicitly disabled via build config 00:13:55.503 acl: explicitly disabled via build config 00:13:55.503 bbdev: explicitly disabled via build config 00:13:55.503 bitratestats: explicitly disabled via build config 00:13:55.503 bpf: explicitly disabled via build config 00:13:55.503 cfgfile: explicitly disabled via build config 00:13:55.503 distributor: explicitly disabled via build config 00:13:55.503 efd: explicitly disabled via build config 00:13:55.503 eventdev: explicitly disabled via build config 00:13:55.503 dispatcher: explicitly disabled via build config 00:13:55.503 gpudev: explicitly disabled via build config 00:13:55.503 gro: explicitly disabled via build config 00:13:55.503 gso: explicitly disabled via build config 00:13:55.503 ip_frag: explicitly disabled via build config 00:13:55.503 jobstats: explicitly disabled via build config 00:13:55.503 latencystats: explicitly disabled via build config 00:13:55.503 lpm: explicitly disabled via build config 00:13:55.503 member: explicitly disabled via build config 00:13:55.503 pcapng: explicitly disabled via build config 00:13:55.503 rawdev: explicitly disabled via build config 00:13:55.503 regexdev: explicitly disabled via build config 00:13:55.503 mldev: explicitly disabled via build config 00:13:55.503 rib: explicitly disabled via build config 00:13:55.503 sched: explicitly disabled via build config 00:13:55.503 stack: explicitly disabled via build config 00:13:55.503 ipsec: explicitly disabled via build config 00:13:55.503 pdcp: explicitly disabled via build config 00:13:55.503 fib: explicitly disabled via build config 00:13:55.503 port: explicitly disabled via build config 00:13:55.503 pdump: explicitly disabled via build config 00:13:55.503 table: explicitly disabled via build config 00:13:55.503 pipeline: explicitly disabled via build config 00:13:55.503 graph: explicitly disabled via build config 00:13:55.503 node: explicitly disabled via build config 00:13:55.503 00:13:55.503 drivers: 00:13:55.503 common/cpt: not in enabled drivers build config 00:13:55.503 common/dpaax: not in enabled drivers build config 00:13:55.503 common/iavf: not in enabled drivers build config 00:13:55.503 common/idpf: not in enabled drivers build config 00:13:55.503 common/ionic: not in enabled drivers build config 00:13:55.503 common/mvep: not in enabled drivers build config 00:13:55.503 common/octeontx: not in enabled drivers build config 00:13:55.504 bus/auxiliary: not in enabled drivers build config 00:13:55.504 bus/cdx: not in enabled drivers build config 00:13:55.504 bus/dpaa: not in enabled drivers build config 00:13:55.504 bus/fslmc: not in enabled drivers build config 00:13:55.504 bus/ifpga: not in enabled drivers build config 00:13:55.504 bus/platform: not in enabled drivers build config 00:13:55.504 bus/uacce: not in enabled drivers build config 00:13:55.504 bus/vmbus: not in enabled drivers build config 00:13:55.504 common/cnxk: not in enabled drivers build config 00:13:55.504 common/mlx5: not in enabled drivers build config 00:13:55.504 
common/nfp: not in enabled drivers build config 00:13:55.504 common/nitrox: not in enabled drivers build config 00:13:55.504 common/qat: not in enabled drivers build config 00:13:55.504 common/sfc_efx: not in enabled drivers build config 00:13:55.504 mempool/bucket: not in enabled drivers build config 00:13:55.504 mempool/cnxk: not in enabled drivers build config 00:13:55.504 mempool/dpaa: not in enabled drivers build config 00:13:55.504 mempool/dpaa2: not in enabled drivers build config 00:13:55.504 mempool/octeontx: not in enabled drivers build config 00:13:55.504 mempool/stack: not in enabled drivers build config 00:13:55.504 dma/cnxk: not in enabled drivers build config 00:13:55.504 dma/dpaa: not in enabled drivers build config 00:13:55.504 dma/dpaa2: not in enabled drivers build config 00:13:55.504 dma/hisilicon: not in enabled drivers build config 00:13:55.504 dma/idxd: not in enabled drivers build config 00:13:55.504 dma/ioat: not in enabled drivers build config 00:13:55.504 dma/skeleton: not in enabled drivers build config 00:13:55.504 net/af_packet: not in enabled drivers build config 00:13:55.504 net/af_xdp: not in enabled drivers build config 00:13:55.504 net/ark: not in enabled drivers build config 00:13:55.504 net/atlantic: not in enabled drivers build config 00:13:55.504 net/avp: not in enabled drivers build config 00:13:55.504 net/axgbe: not in enabled drivers build config 00:13:55.504 net/bnx2x: not in enabled drivers build config 00:13:55.504 net/bnxt: not in enabled drivers build config 00:13:55.504 net/bonding: not in enabled drivers build config 00:13:55.504 net/cnxk: not in enabled drivers build config 00:13:55.504 net/cpfl: not in enabled drivers build config 00:13:55.504 net/cxgbe: not in enabled drivers build config 00:13:55.504 net/dpaa: not in enabled drivers build config 00:13:55.504 net/dpaa2: not in enabled drivers build config 00:13:55.504 net/e1000: not in enabled drivers build config 00:13:55.504 net/ena: not in enabled drivers build config 00:13:55.504 net/enetc: not in enabled drivers build config 00:13:55.504 net/enetfec: not in enabled drivers build config 00:13:55.504 net/enic: not in enabled drivers build config 00:13:55.504 net/failsafe: not in enabled drivers build config 00:13:55.504 net/fm10k: not in enabled drivers build config 00:13:55.504 net/gve: not in enabled drivers build config 00:13:55.504 net/hinic: not in enabled drivers build config 00:13:55.504 net/hns3: not in enabled drivers build config 00:13:55.504 net/i40e: not in enabled drivers build config 00:13:55.504 net/iavf: not in enabled drivers build config 00:13:55.504 net/ice: not in enabled drivers build config 00:13:55.504 net/idpf: not in enabled drivers build config 00:13:55.504 net/igc: not in enabled drivers build config 00:13:55.504 net/ionic: not in enabled drivers build config 00:13:55.504 net/ipn3ke: not in enabled drivers build config 00:13:55.504 net/ixgbe: not in enabled drivers build config 00:13:55.504 net/mana: not in enabled drivers build config 00:13:55.504 net/memif: not in enabled drivers build config 00:13:55.504 net/mlx4: not in enabled drivers build config 00:13:55.504 net/mlx5: not in enabled drivers build config 00:13:55.504 net/mvneta: not in enabled drivers build config 00:13:55.504 net/mvpp2: not in enabled drivers build config 00:13:55.504 net/netvsc: not in enabled drivers build config 00:13:55.504 net/nfb: not in enabled drivers build config 00:13:55.504 net/nfp: not in enabled drivers build config 00:13:55.504 net/ngbe: not in enabled drivers build 
config 00:13:55.504 net/null: not in enabled drivers build config 00:13:55.504 net/octeontx: not in enabled drivers build config 00:13:55.504 net/octeon_ep: not in enabled drivers build config 00:13:55.504 net/pcap: not in enabled drivers build config 00:13:55.504 net/pfe: not in enabled drivers build config 00:13:55.504 net/qede: not in enabled drivers build config 00:13:55.504 net/ring: not in enabled drivers build config 00:13:55.504 net/sfc: not in enabled drivers build config 00:13:55.504 net/softnic: not in enabled drivers build config 00:13:55.504 net/tap: not in enabled drivers build config 00:13:55.504 net/thunderx: not in enabled drivers build config 00:13:55.504 net/txgbe: not in enabled drivers build config 00:13:55.504 net/vdev_netvsc: not in enabled drivers build config 00:13:55.504 net/vhost: not in enabled drivers build config 00:13:55.504 net/virtio: not in enabled drivers build config 00:13:55.504 net/vmxnet3: not in enabled drivers build config 00:13:55.504 raw/*: missing internal dependency, "rawdev" 00:13:55.504 crypto/armv8: not in enabled drivers build config 00:13:55.504 crypto/bcmfs: not in enabled drivers build config 00:13:55.504 crypto/caam_jr: not in enabled drivers build config 00:13:55.504 crypto/ccp: not in enabled drivers build config 00:13:55.504 crypto/cnxk: not in enabled drivers build config 00:13:55.504 crypto/dpaa_sec: not in enabled drivers build config 00:13:55.504 crypto/dpaa2_sec: not in enabled drivers build config 00:13:55.504 crypto/ipsec_mb: not in enabled drivers build config 00:13:55.504 crypto/mlx5: not in enabled drivers build config 00:13:55.504 crypto/mvsam: not in enabled drivers build config 00:13:55.504 crypto/nitrox: not in enabled drivers build config 00:13:55.504 crypto/null: not in enabled drivers build config 00:13:55.504 crypto/octeontx: not in enabled drivers build config 00:13:55.504 crypto/openssl: not in enabled drivers build config 00:13:55.504 crypto/scheduler: not in enabled drivers build config 00:13:55.504 crypto/uadk: not in enabled drivers build config 00:13:55.504 crypto/virtio: not in enabled drivers build config 00:13:55.504 compress/isal: not in enabled drivers build config 00:13:55.504 compress/mlx5: not in enabled drivers build config 00:13:55.504 compress/nitrox: not in enabled drivers build config 00:13:55.504 compress/octeontx: not in enabled drivers build config 00:13:55.504 compress/zlib: not in enabled drivers build config 00:13:55.504 regex/*: missing internal dependency, "regexdev" 00:13:55.504 ml/*: missing internal dependency, "mldev" 00:13:55.504 vdpa/ifc: not in enabled drivers build config 00:13:55.504 vdpa/mlx5: not in enabled drivers build config 00:13:55.504 vdpa/nfp: not in enabled drivers build config 00:13:55.504 vdpa/sfc: not in enabled drivers build config 00:13:55.504 event/*: missing internal dependency, "eventdev" 00:13:55.504 baseband/*: missing internal dependency, "bbdev" 00:13:55.504 gpu/*: missing internal dependency, "gpudev" 00:13:55.504 00:13:55.504 00:13:55.504 Build targets in project: 84 00:13:55.504 00:13:55.504 DPDK 24.03.0 00:13:55.504 00:13:55.504 User defined options 00:13:55.504 buildtype : debug 00:13:55.504 default_library : shared 00:13:55.504 libdir : lib 00:13:55.504 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:13:55.504 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:13:55.504 c_link_args : 00:13:55.504 cpu_instruction_set: native 00:13:55.504 disable_apps : 
test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:13:55.504 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:13:55.504 enable_docs : false 00:13:55.504 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:13:55.504 enable_kmods : false 00:13:55.504 max_lcores : 128 00:13:55.504 tests : false 00:13:55.504 00:13:55.504 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:13:55.504 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:13:55.788 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:13:55.788 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:13:55.788 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:13:55.788 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:13:55.788 [5/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:13:55.788 [6/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:13:55.788 [7/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:13:55.788 [8/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:13:55.788 [9/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:13:55.788 [10/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:13:55.788 [11/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:13:55.788 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:13:55.788 [13/267] Linking static target lib/librte_pci.a 00:13:55.788 [14/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:13:55.788 [15/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:13:55.788 [16/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:13:56.052 [17/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:13:56.052 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:13:56.052 [19/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:13:56.052 [20/267] Linking static target lib/librte_kvargs.a 00:13:56.052 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:13:56.052 [22/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:13:56.052 [23/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:13:56.052 [24/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:13:56.052 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:13:56.052 [26/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:13:56.052 [27/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:13:56.052 [28/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:13:56.052 [29/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:13:56.052 [30/267] Linking static target lib/librte_log.a 00:13:56.052 [31/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:13:56.052 [32/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:13:56.052 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:13:56.052 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:13:56.052 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:13:56.052 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:13:56.052 [37/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:13:56.052 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:13:56.052 [39/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:13:56.052 [40/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:13:56.052 [41/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:13:56.052 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:13:56.052 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:13:56.052 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:13:56.052 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:13:56.052 [46/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:13:56.052 [47/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:13:56.052 [48/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:13:56.052 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:13:56.052 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:13:56.052 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:13:56.052 [52/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:13:56.052 [53/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:13:56.052 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:13:56.052 [55/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:13:56.052 [56/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:13:56.052 [57/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:13:56.052 [58/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:13:56.312 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:13:56.312 [60/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:13:56.313 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:13:56.313 [62/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:13:56.313 [63/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:13:56.313 [64/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:13:56.313 [65/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:13:56.313 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:13:56.313 [67/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:13:56.313 [68/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:13:56.313 [69/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:13:56.313 [70/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:13:56.313 
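For reference, the "User defined options" summary above corresponds to a meson setup invocation along the following lines. This is a minimal sketch assuming the options are passed as stock DPDK 24.03 meson -D flags; the exact command SPDK's dpdkbuild wrapper emits is not shown in this log, and the disable_apps/disable_libs values are abbreviated here via placeholder variables (the full comma-separated lists are printed in the summary above).

    # Sketch only: reconstructed from the meson "User defined options"
    # summary above, not copied from the log. DISABLE_APPS and DISABLE_LIBS
    # stand for the full lists printed in that summary.
    DPDK_PREFIX=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
    meson setup build-tmp \
      --buildtype=debug \
      --default-library=shared \
      --libdir=lib \
      --prefix="$DPDK_PREFIX" \
      -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
      -Dcpu_instruction_set=native \
      -Ddisable_apps="$DISABLE_APPS" \
      -Ddisable_libs="$DISABLE_LIBS" \
      -Denable_docs=false \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
      -Denable_kmods=false \
      -Dmax_lcores=128 \
      -Dtests=false
    # Matches the backend command the log reports later:
    ninja -C build-tmp -j 144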
[71/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:13:56.313 [72/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:13:56.313 [73/267] Linking static target lib/librte_ring.a 00:13:56.313 [74/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:13:56.313 [75/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:13:56.313 [76/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:13:56.313 [77/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:13:56.313 [78/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:13:56.313 [79/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:13:56.313 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:13:56.313 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:13:56.313 [82/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:13:56.313 [83/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:13:56.313 [84/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:13:56.313 [85/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:13:56.313 [86/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:13:56.313 [87/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:13:56.313 [88/267] Linking static target lib/librte_net.a 00:13:56.313 [89/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:13:56.313 [90/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:13:56.313 [91/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:13:56.313 [92/267] Linking static target lib/librte_dmadev.a 00:13:56.313 [93/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:13:56.313 [94/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:13:56.313 [95/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:13:56.313 [96/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:13:56.313 [97/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:13:56.313 [98/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:13:56.313 [99/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:13:56.313 [100/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:13:56.313 [101/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:13:56.313 [102/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:13:56.313 [103/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:13:56.313 [104/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:13:56.313 [105/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:13:56.313 [106/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:13:56.313 [107/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:13:56.313 [108/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:13:56.313 [109/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:13:56.313 [110/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:13:56.575 [111/267] Compiling C object 
lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:13:56.575 [112/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:13:56.575 [113/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:13:56.575 [114/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:13:56.575 [115/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:13:56.575 [116/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:13:56.575 [117/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:13:56.575 [118/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:13:56.575 [119/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:13:56.575 [120/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:13:56.575 [121/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:13:56.575 [122/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:13:56.575 [123/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:13:56.575 [124/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:13:56.575 [125/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:13:56.575 [126/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:13:56.575 [127/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:13:56.575 [128/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:13:56.575 [129/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:13:56.575 [130/267] Linking static target lib/librte_meter.a 00:13:56.575 [131/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:13:56.575 [132/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:13:56.575 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:13:56.575 [134/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:13:56.575 [135/267] Linking static target lib/librte_mbuf.a 00:13:56.575 [136/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:13:56.575 [137/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:13:56.575 [138/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:13:56.575 [139/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:13:56.575 [140/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:13:56.575 [141/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:13:56.575 [142/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:13:56.575 [143/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:13:56.575 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:13:56.575 [145/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:13:56.575 [146/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:13:56.575 [147/267] Linking static target lib/librte_telemetry.a 00:13:56.575 [148/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:13:56.575 [149/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:13:56.575 [150/267] Linking static target lib/librte_timer.a 00:13:56.575 [151/267] Linking static target lib/librte_mempool.a 00:13:56.575 [152/267] 
Linking static target lib/librte_cmdline.a 00:13:56.575 [153/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:13:56.575 [154/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:13:56.575 [155/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:13:56.575 [156/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:13:56.575 [157/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:13:56.575 [158/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:13:56.575 [159/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:13:56.575 [160/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:13:56.575 [161/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:13:56.575 [162/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:13:56.575 [163/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:13:56.575 [164/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:13:56.575 [165/267] Linking static target lib/librte_compressdev.a 00:13:56.575 [166/267] Linking static target lib/librte_power.a 00:13:56.575 [167/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:13:56.575 [168/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:13:56.575 [169/267] Linking static target lib/librte_rcu.a 00:13:56.575 [170/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:13:56.575 [171/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:13:56.575 [172/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:13:56.575 [173/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:13:56.575 [174/267] Linking static target lib/librte_eal.a 00:13:56.575 [175/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:13:56.575 [176/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:13:56.837 [177/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:13:56.837 [178/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:13:56.837 [179/267] Linking static target lib/librte_reorder.a 00:13:56.837 [180/267] Linking static target lib/librte_security.a 00:13:56.837 [181/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:13:56.837 [182/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:13:56.837 [183/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:13:56.837 [184/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:13:56.837 [185/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:13:56.837 [186/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:13:56.837 [187/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:13:56.837 [188/267] Linking static target lib/librte_cryptodev.a 00:13:56.837 [189/267] Linking target lib/librte_log.so.24.1 00:13:56.837 [190/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:13:56.837 [191/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:13:56.837 [192/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:13:56.837 [193/267] Compiling C object 
lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:13:56.837 [194/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:13:56.837 [195/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:13:56.837 [196/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:13:56.837 [197/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:13:56.837 [198/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:13:56.837 [199/267] Linking static target lib/librte_hash.a 00:13:56.837 [200/267] Linking static target drivers/librte_bus_pci.a 00:13:56.837 [201/267] Linking static target drivers/librte_bus_vdev.a 00:13:56.837 [202/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:13:57.098 [203/267] Linking target lib/librte_kvargs.so.24.1 00:13:57.098 [204/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:13:57.098 [205/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:13:57.098 [206/267] Linking static target drivers/librte_mempool_ring.a 00:13:57.098 [207/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:13:57.098 [208/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:57.098 [209/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:13:57.098 [210/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:13:57.098 [211/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:13:57.359 [212/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:13:57.359 [213/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:13:57.359 [214/267] Linking target lib/librte_telemetry.so.24.1 00:13:57.359 [215/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:57.359 [216/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:13:57.359 [217/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:13:57.359 [218/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:13:57.359 [219/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:57.620 [220/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:13:57.620 [221/267] Linking static target lib/librte_ethdev.a 00:13:57.620 [222/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:13:57.620 [223/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:13:57.882 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:13:57.882 [225/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:13:57.882 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:13:58.453 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:13:58.453 [228/267] Linking static target lib/librte_vhost.a 00:13:59.026 [229/267] Generating 
lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:14:00.474 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:14:07.063 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:14:08.445 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:14:08.445 [233/267] Linking target lib/librte_eal.so.24.1 00:14:08.445 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:14:08.445 [235/267] Linking target lib/librte_ring.so.24.1 00:14:08.445 [236/267] Linking target lib/librte_dmadev.so.24.1 00:14:08.445 [237/267] Linking target lib/librte_meter.so.24.1 00:14:08.445 [238/267] Linking target lib/librte_timer.so.24.1 00:14:08.445 [239/267] Linking target lib/librte_pci.so.24.1 00:14:08.445 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:14:08.445 [241/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:14:08.445 [242/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:14:08.445 [243/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:14:08.445 [244/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:14:08.445 [245/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:14:08.705 [246/267] Linking target lib/librte_rcu.so.24.1 00:14:08.705 [247/267] Linking target drivers/librte_bus_pci.so.24.1 00:14:08.705 [248/267] Linking target lib/librte_mempool.so.24.1 00:14:08.705 [249/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:14:08.705 [250/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:14:08.705 [251/267] Linking target lib/librte_mbuf.so.24.1 00:14:08.705 [252/267] Linking target drivers/librte_mempool_ring.so.24.1 00:14:08.966 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:14:08.966 [254/267] Linking target lib/librte_compressdev.so.24.1 00:14:08.966 [255/267] Linking target lib/librte_net.so.24.1 00:14:08.966 [256/267] Linking target lib/librte_reorder.so.24.1 00:14:08.966 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:14:08.966 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:14:08.966 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:14:08.966 [260/267] Linking target lib/librte_hash.so.24.1 00:14:08.966 [261/267] Linking target lib/librte_cmdline.so.24.1 00:14:08.966 [262/267] Linking target lib/librte_security.so.24.1 00:14:08.966 [263/267] Linking target lib/librte_ethdev.so.24.1 00:14:09.227 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:14:09.227 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:14:09.227 [266/267] Linking target lib/librte_power.so.24.1 00:14:09.227 [267/267] Linking target lib/librte_vhost.so.24.1 00:14:09.227 INFO: autodetecting backend as ninja 00:14:09.227 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:14:13.433 CC lib/log/log.o 00:14:13.433 CC lib/log/log_flags.o 00:14:13.433 CC lib/log/log_deprecated.o 00:14:13.433 CC lib/ut_mock/mock.o 00:14:13.433 CC lib/ut/ut.o 00:14:13.433 LIB libspdk_ut.a 
00:14:13.433 LIB libspdk_log.a 00:14:13.433 LIB libspdk_ut_mock.a 00:14:13.433 SO libspdk_ut.so.2.0 00:14:13.434 SO libspdk_log.so.7.0 00:14:13.434 SO libspdk_ut_mock.so.6.0 00:14:13.434 SYMLINK libspdk_ut.so 00:14:13.434 SYMLINK libspdk_log.so 00:14:13.434 SYMLINK libspdk_ut_mock.so 00:14:13.434 CC lib/util/base64.o 00:14:13.434 CC lib/dma/dma.o 00:14:13.434 CC lib/util/bit_array.o 00:14:13.434 CC lib/util/cpuset.o 00:14:13.434 CC lib/util/crc16.o 00:14:13.434 CC lib/util/crc32.o 00:14:13.434 CC lib/util/crc64.o 00:14:13.434 CC lib/util/crc32c.o 00:14:13.434 CC lib/util/crc32_ieee.o 00:14:13.434 CC lib/util/fd.o 00:14:13.434 CC lib/util/dif.o 00:14:13.434 CC lib/util/fd_group.o 00:14:13.434 CC lib/util/file.o 00:14:13.434 CC lib/util/hexlify.o 00:14:13.434 CC lib/util/iov.o 00:14:13.434 CC lib/util/math.o 00:14:13.434 CC lib/util/net.o 00:14:13.434 CXX lib/trace_parser/trace.o 00:14:13.434 CC lib/util/uuid.o 00:14:13.434 CC lib/util/pipe.o 00:14:13.434 CC lib/ioat/ioat.o 00:14:13.434 CC lib/util/strerror_tls.o 00:14:13.434 CC lib/util/string.o 00:14:13.434 CC lib/util/xor.o 00:14:13.434 CC lib/util/zipf.o 00:14:13.434 CC lib/util/md5.o 00:14:13.695 CC lib/vfio_user/host/vfio_user.o 00:14:13.695 CC lib/vfio_user/host/vfio_user_pci.o 00:14:13.695 LIB libspdk_dma.a 00:14:13.695 SO libspdk_dma.so.5.0 00:14:13.695 SYMLINK libspdk_dma.so 00:14:13.695 LIB libspdk_ioat.a 00:14:13.695 SO libspdk_ioat.so.7.0 00:14:13.695 SYMLINK libspdk_ioat.so 00:14:13.695 LIB libspdk_vfio_user.a 00:14:13.955 SO libspdk_vfio_user.so.5.0 00:14:13.955 LIB libspdk_util.a 00:14:13.955 SYMLINK libspdk_vfio_user.so 00:14:13.955 SO libspdk_util.so.10.0 00:14:13.955 SYMLINK libspdk_util.so 00:14:14.215 LIB libspdk_trace_parser.a 00:14:14.215 SO libspdk_trace_parser.so.6.0 00:14:14.215 SYMLINK libspdk_trace_parser.so 00:14:14.476 CC lib/rdma_provider/common.o 00:14:14.476 CC lib/rdma_provider/rdma_provider_verbs.o 00:14:14.476 CC lib/conf/conf.o 00:14:14.476 CC lib/vmd/vmd.o 00:14:14.476 CC lib/json/json_parse.o 00:14:14.476 CC lib/vmd/led.o 00:14:14.476 CC lib/rdma_utils/rdma_utils.o 00:14:14.476 CC lib/json/json_util.o 00:14:14.476 CC lib/json/json_write.o 00:14:14.476 CC lib/env_dpdk/env.o 00:14:14.476 CC lib/idxd/idxd.o 00:14:14.476 CC lib/env_dpdk/memory.o 00:14:14.476 CC lib/idxd/idxd_user.o 00:14:14.476 CC lib/env_dpdk/pci.o 00:14:14.476 CC lib/idxd/idxd_kernel.o 00:14:14.476 CC lib/env_dpdk/init.o 00:14:14.476 CC lib/env_dpdk/threads.o 00:14:14.476 CC lib/env_dpdk/pci_ioat.o 00:14:14.476 CC lib/env_dpdk/pci_virtio.o 00:14:14.476 CC lib/env_dpdk/pci_vmd.o 00:14:14.476 CC lib/env_dpdk/pci_idxd.o 00:14:14.476 CC lib/env_dpdk/pci_dpdk.o 00:14:14.476 CC lib/env_dpdk/pci_event.o 00:14:14.476 CC lib/env_dpdk/sigbus_handler.o 00:14:14.476 CC lib/env_dpdk/pci_dpdk_2207.o 00:14:14.476 CC lib/env_dpdk/pci_dpdk_2211.o 00:14:14.737 LIB libspdk_rdma_provider.a 00:14:14.737 LIB libspdk_conf.a 00:14:14.737 SO libspdk_rdma_provider.so.6.0 00:14:14.737 SO libspdk_conf.so.6.0 00:14:14.737 LIB libspdk_rdma_utils.a 00:14:14.737 LIB libspdk_json.a 00:14:14.737 SYMLINK libspdk_rdma_provider.so 00:14:14.737 SO libspdk_rdma_utils.so.1.0 00:14:14.737 SO libspdk_json.so.6.0 00:14:14.737 SYMLINK libspdk_conf.so 00:14:14.737 SYMLINK libspdk_rdma_utils.so 00:14:14.737 SYMLINK libspdk_json.so 00:14:14.998 LIB libspdk_idxd.a 00:14:14.998 SO libspdk_idxd.so.12.1 00:14:14.998 LIB libspdk_vmd.a 00:14:14.998 SYMLINK libspdk_idxd.so 00:14:14.998 SO libspdk_vmd.so.6.0 00:14:15.259 SYMLINK libspdk_vmd.so 00:14:15.259 CC lib/jsonrpc/jsonrpc_server.o 
00:14:15.259 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:14:15.259 CC lib/jsonrpc/jsonrpc_client.o 00:14:15.259 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:14:15.521 LIB libspdk_jsonrpc.a 00:14:15.521 SO libspdk_jsonrpc.so.6.0 00:14:15.521 SYMLINK libspdk_jsonrpc.so 00:14:15.521 LIB libspdk_env_dpdk.a 00:14:15.781 SO libspdk_env_dpdk.so.15.0 00:14:15.781 SYMLINK libspdk_env_dpdk.so 00:14:15.781 CC lib/rpc/rpc.o 00:14:16.043 LIB libspdk_rpc.a 00:14:16.043 SO libspdk_rpc.so.6.0 00:14:16.305 SYMLINK libspdk_rpc.so 00:14:16.566 CC lib/keyring/keyring.o 00:14:16.566 CC lib/keyring/keyring_rpc.o 00:14:16.566 CC lib/notify/notify.o 00:14:16.566 CC lib/notify/notify_rpc.o 00:14:16.566 CC lib/trace/trace.o 00:14:16.566 CC lib/trace/trace_flags.o 00:14:16.566 CC lib/trace/trace_rpc.o 00:14:16.827 LIB libspdk_notify.a 00:14:16.827 SO libspdk_notify.so.6.0 00:14:16.827 LIB libspdk_keyring.a 00:14:16.827 LIB libspdk_trace.a 00:14:16.827 SO libspdk_keyring.so.2.0 00:14:16.827 SYMLINK libspdk_notify.so 00:14:16.827 SO libspdk_trace.so.11.0 00:14:16.827 SYMLINK libspdk_keyring.so 00:14:17.087 SYMLINK libspdk_trace.so 00:14:17.348 CC lib/thread/thread.o 00:14:17.348 CC lib/thread/iobuf.o 00:14:17.348 CC lib/sock/sock.o 00:14:17.348 CC lib/sock/sock_rpc.o 00:14:17.608 LIB libspdk_sock.a 00:14:17.608 SO libspdk_sock.so.10.0 00:14:17.944 SYMLINK libspdk_sock.so 00:14:18.229 CC lib/nvme/nvme_ctrlr.o 00:14:18.229 CC lib/nvme/nvme_ctrlr_cmd.o 00:14:18.229 CC lib/nvme/nvme_fabric.o 00:14:18.229 CC lib/nvme/nvme_ns_cmd.o 00:14:18.229 CC lib/nvme/nvme_ns.o 00:14:18.229 CC lib/nvme/nvme_pcie_common.o 00:14:18.229 CC lib/nvme/nvme_pcie.o 00:14:18.229 CC lib/nvme/nvme_qpair.o 00:14:18.229 CC lib/nvme/nvme.o 00:14:18.229 CC lib/nvme/nvme_quirks.o 00:14:18.229 CC lib/nvme/nvme_transport.o 00:14:18.229 CC lib/nvme/nvme_discovery.o 00:14:18.229 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:14:18.229 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:14:18.229 CC lib/nvme/nvme_tcp.o 00:14:18.229 CC lib/nvme/nvme_opal.o 00:14:18.229 CC lib/nvme/nvme_io_msg.o 00:14:18.229 CC lib/nvme/nvme_poll_group.o 00:14:18.229 CC lib/nvme/nvme_zns.o 00:14:18.229 CC lib/nvme/nvme_stubs.o 00:14:18.229 CC lib/nvme/nvme_auth.o 00:14:18.229 CC lib/nvme/nvme_cuse.o 00:14:18.229 CC lib/nvme/nvme_vfio_user.o 00:14:18.229 CC lib/nvme/nvme_rdma.o 00:14:18.518 LIB libspdk_thread.a 00:14:18.518 SO libspdk_thread.so.10.1 00:14:18.811 SYMLINK libspdk_thread.so 00:14:19.071 CC lib/virtio/virtio.o 00:14:19.071 CC lib/virtio/virtio_vhost_user.o 00:14:19.071 CC lib/virtio/virtio_vfio_user.o 00:14:19.071 CC lib/virtio/virtio_pci.o 00:14:19.071 CC lib/vfu_tgt/tgt_endpoint.o 00:14:19.071 CC lib/vfu_tgt/tgt_rpc.o 00:14:19.071 CC lib/accel/accel.o 00:14:19.071 CC lib/accel/accel_sw.o 00:14:19.071 CC lib/accel/accel_rpc.o 00:14:19.071 CC lib/blob/blobstore.o 00:14:19.071 CC lib/init/json_config.o 00:14:19.071 CC lib/blob/request.o 00:14:19.071 CC lib/init/subsystem.o 00:14:19.071 CC lib/blob/zeroes.o 00:14:19.071 CC lib/fsdev/fsdev.o 00:14:19.071 CC lib/blob/blob_bs_dev.o 00:14:19.071 CC lib/fsdev/fsdev_io.o 00:14:19.071 CC lib/init/subsystem_rpc.o 00:14:19.071 CC lib/init/rpc.o 00:14:19.071 CC lib/fsdev/fsdev_rpc.o 00:14:19.333 LIB libspdk_init.a 00:14:19.333 SO libspdk_init.so.6.0 00:14:19.333 LIB libspdk_virtio.a 00:14:19.333 LIB libspdk_vfu_tgt.a 00:14:19.333 SO libspdk_virtio.so.7.0 00:14:19.333 SYMLINK libspdk_init.so 00:14:19.333 SO libspdk_vfu_tgt.so.3.0 00:14:19.333 SYMLINK libspdk_virtio.so 00:14:19.333 SYMLINK libspdk_vfu_tgt.so 00:14:19.594 LIB libspdk_fsdev.a 00:14:19.594 SO 
libspdk_fsdev.so.1.0 00:14:19.594 CC lib/event/app.o 00:14:19.594 CC lib/event/reactor.o 00:14:19.594 CC lib/event/log_rpc.o 00:14:19.594 CC lib/event/app_rpc.o 00:14:19.594 CC lib/event/scheduler_static.o 00:14:19.855 SYMLINK libspdk_fsdev.so 00:14:19.855 LIB libspdk_nvme.a 00:14:19.855 LIB libspdk_accel.a 00:14:20.117 SO libspdk_accel.so.16.0 00:14:20.117 SO libspdk_nvme.so.14.0 00:14:20.117 SYMLINK libspdk_accel.so 00:14:20.117 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:14:20.117 LIB libspdk_event.a 00:14:20.117 SO libspdk_event.so.14.0 00:14:20.378 SYMLINK libspdk_event.so 00:14:20.378 SYMLINK libspdk_nvme.so 00:14:20.378 CC lib/bdev/bdev.o 00:14:20.378 CC lib/bdev/bdev_rpc.o 00:14:20.378 CC lib/bdev/bdev_zone.o 00:14:20.378 CC lib/bdev/part.o 00:14:20.378 CC lib/bdev/scsi_nvme.o 00:14:20.638 LIB libspdk_fuse_dispatcher.a 00:14:20.638 SO libspdk_fuse_dispatcher.so.1.0 00:14:20.638 SYMLINK libspdk_fuse_dispatcher.so 00:14:21.210 LIB libspdk_blob.a 00:14:21.210 SO libspdk_blob.so.11.0 00:14:21.471 SYMLINK libspdk_blob.so 00:14:21.734 CC lib/blobfs/blobfs.o 00:14:21.734 CC lib/blobfs/tree.o 00:14:21.734 CC lib/lvol/lvol.o 00:14:22.681 LIB libspdk_blobfs.a 00:14:22.681 SO libspdk_blobfs.so.10.0 00:14:22.681 LIB libspdk_lvol.a 00:14:22.681 SYMLINK libspdk_blobfs.so 00:14:22.681 SO libspdk_lvol.so.10.0 00:14:22.681 SYMLINK libspdk_lvol.so 00:14:22.681 LIB libspdk_bdev.a 00:14:22.681 SO libspdk_bdev.so.17.0 00:14:22.942 SYMLINK libspdk_bdev.so 00:14:23.218 CC lib/ublk/ublk.o 00:14:23.218 CC lib/nbd/nbd.o 00:14:23.218 CC lib/ublk/ublk_rpc.o 00:14:23.218 CC lib/nbd/nbd_rpc.o 00:14:23.218 CC lib/scsi/dev.o 00:14:23.218 CC lib/nvmf/ctrlr.o 00:14:23.218 CC lib/scsi/lun.o 00:14:23.218 CC lib/nvmf/ctrlr_discovery.o 00:14:23.218 CC lib/nvmf/ctrlr_bdev.o 00:14:23.218 CC lib/scsi/port.o 00:14:23.218 CC lib/nvmf/subsystem.o 00:14:23.218 CC lib/scsi/scsi.o 00:14:23.218 CC lib/nvmf/nvmf.o 00:14:23.218 CC lib/scsi/scsi_bdev.o 00:14:23.218 CC lib/ftl/ftl_core.o 00:14:23.218 CC lib/nvmf/nvmf_rpc.o 00:14:23.218 CC lib/nvmf/transport.o 00:14:23.218 CC lib/scsi/task.o 00:14:23.218 CC lib/scsi/scsi_pr.o 00:14:23.218 CC lib/ftl/ftl_init.o 00:14:23.218 CC lib/scsi/scsi_rpc.o 00:14:23.218 CC lib/nvmf/tcp.o 00:14:23.218 CC lib/ftl/ftl_layout.o 00:14:23.218 CC lib/nvmf/stubs.o 00:14:23.218 CC lib/ftl/ftl_debug.o 00:14:23.218 CC lib/nvmf/mdns_server.o 00:14:23.218 CC lib/nvmf/vfio_user.o 00:14:23.218 CC lib/ftl/ftl_io.o 00:14:23.218 CC lib/ftl/ftl_sb.o 00:14:23.218 CC lib/nvmf/rdma.o 00:14:23.218 CC lib/nvmf/auth.o 00:14:23.218 CC lib/ftl/ftl_l2p.o 00:14:23.218 CC lib/ftl/ftl_l2p_flat.o 00:14:23.218 CC lib/ftl/ftl_nv_cache.o 00:14:23.218 CC lib/ftl/ftl_band.o 00:14:23.218 CC lib/ftl/ftl_writer.o 00:14:23.218 CC lib/ftl/ftl_band_ops.o 00:14:23.218 CC lib/ftl/ftl_rq.o 00:14:23.218 CC lib/ftl/ftl_reloc.o 00:14:23.218 CC lib/ftl/ftl_l2p_cache.o 00:14:23.218 CC lib/ftl/ftl_p2l.o 00:14:23.218 CC lib/ftl/ftl_p2l_log.o 00:14:23.218 CC lib/ftl/mngt/ftl_mngt.o 00:14:23.218 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:14:23.218 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:14:23.218 CC lib/ftl/mngt/ftl_mngt_startup.o 00:14:23.218 CC lib/ftl/mngt/ftl_mngt_md.o 00:14:23.218 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:14:23.218 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:14:23.218 CC lib/ftl/mngt/ftl_mngt_misc.o 00:14:23.218 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:14:23.218 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:14:23.218 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:14:23.218 CC lib/ftl/mngt/ftl_mngt_band.o 00:14:23.218 CC lib/ftl/utils/ftl_md.o 00:14:23.218 CC 
lib/ftl/mngt/ftl_mngt_self_test.o 00:14:23.218 CC lib/ftl/utils/ftl_mempool.o 00:14:23.218 CC lib/ftl/utils/ftl_bitmap.o 00:14:23.218 CC lib/ftl/utils/ftl_conf.o 00:14:23.218 CC lib/ftl/utils/ftl_property.o 00:14:23.218 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:14:23.218 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:14:23.218 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:14:23.218 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:14:23.218 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:14:23.218 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:14:23.218 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:14:23.218 CC lib/ftl/upgrade/ftl_sb_v5.o 00:14:23.218 CC lib/ftl/nvc/ftl_nvc_dev.o 00:14:23.218 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:14:23.218 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:14:23.218 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:14:23.218 CC lib/ftl/base/ftl_base_dev.o 00:14:23.218 CC lib/ftl/ftl_trace.o 00:14:23.218 CC lib/ftl/upgrade/ftl_sb_v3.o 00:14:23.218 CC lib/ftl/base/ftl_base_bdev.o 00:14:23.789 LIB libspdk_nbd.a 00:14:23.789 SO libspdk_nbd.so.7.0 00:14:23.789 SYMLINK libspdk_nbd.so 00:14:23.789 LIB libspdk_ublk.a 00:14:23.789 SO libspdk_ublk.so.3.0 00:14:24.055 LIB libspdk_scsi.a 00:14:24.055 SYMLINK libspdk_ublk.so 00:14:24.055 SO libspdk_scsi.so.9.0 00:14:24.055 SYMLINK libspdk_scsi.so 00:14:24.318 LIB libspdk_ftl.a 00:14:24.318 SO libspdk_ftl.so.9.0 00:14:24.579 CC lib/vhost/vhost.o 00:14:24.579 CC lib/vhost/vhost_rpc.o 00:14:24.579 CC lib/iscsi/conn.o 00:14:24.579 CC lib/vhost/rte_vhost_user.o 00:14:24.579 CC lib/vhost/vhost_scsi.o 00:14:24.579 CC lib/iscsi/init_grp.o 00:14:24.579 CC lib/vhost/vhost_blk.o 00:14:24.579 CC lib/iscsi/iscsi.o 00:14:24.579 CC lib/iscsi/param.o 00:14:24.579 CC lib/iscsi/portal_grp.o 00:14:24.579 CC lib/iscsi/tgt_node.o 00:14:24.579 CC lib/iscsi/iscsi_subsystem.o 00:14:24.579 CC lib/iscsi/iscsi_rpc.o 00:14:24.579 CC lib/iscsi/task.o 00:14:24.579 SYMLINK libspdk_ftl.so 00:14:25.153 LIB libspdk_nvmf.a 00:14:25.415 SO libspdk_nvmf.so.19.0 00:14:25.415 LIB libspdk_vhost.a 00:14:25.415 SO libspdk_vhost.so.8.0 00:14:25.415 SYMLINK libspdk_nvmf.so 00:14:25.675 SYMLINK libspdk_vhost.so 00:14:25.676 LIB libspdk_iscsi.a 00:14:25.676 SO libspdk_iscsi.so.8.0 00:14:25.676 SYMLINK libspdk_iscsi.so 00:14:26.247 CC module/env_dpdk/env_dpdk_rpc.o 00:14:26.247 CC module/vfu_device/vfu_virtio.o 00:14:26.247 CC module/vfu_device/vfu_virtio_blk.o 00:14:26.247 CC module/vfu_device/vfu_virtio_scsi.o 00:14:26.247 CC module/vfu_device/vfu_virtio_rpc.o 00:14:26.247 CC module/vfu_device/vfu_virtio_fs.o 00:14:26.508 LIB libspdk_env_dpdk_rpc.a 00:14:26.508 CC module/scheduler/gscheduler/gscheduler.o 00:14:26.508 CC module/keyring/file/keyring.o 00:14:26.508 SO libspdk_env_dpdk_rpc.so.6.0 00:14:26.508 CC module/keyring/file/keyring_rpc.o 00:14:26.508 CC module/blob/bdev/blob_bdev.o 00:14:26.508 CC module/sock/posix/posix.o 00:14:26.508 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:14:26.508 CC module/accel/error/accel_error.o 00:14:26.508 CC module/keyring/linux/keyring.o 00:14:26.508 CC module/keyring/linux/keyring_rpc.o 00:14:26.508 CC module/accel/error/accel_error_rpc.o 00:14:26.508 CC module/accel/ioat/accel_ioat.o 00:14:26.508 CC module/accel/ioat/accel_ioat_rpc.o 00:14:26.508 CC module/accel/iaa/accel_iaa.o 00:14:26.508 CC module/scheduler/dynamic/scheduler_dynamic.o 00:14:26.508 CC module/accel/iaa/accel_iaa_rpc.o 00:14:26.508 CC module/accel/dsa/accel_dsa.o 00:14:26.508 CC module/fsdev/aio/fsdev_aio.o 00:14:26.508 CC module/accel/dsa/accel_dsa_rpc.o 00:14:26.508 CC module/fsdev/aio/fsdev_aio_rpc.o 
00:14:26.508 CC module/fsdev/aio/linux_aio_mgr.o 00:14:26.508 SYMLINK libspdk_env_dpdk_rpc.so 00:14:26.770 LIB libspdk_scheduler_gscheduler.a 00:14:26.770 LIB libspdk_keyring_linux.a 00:14:26.770 LIB libspdk_keyring_file.a 00:14:26.770 SO libspdk_scheduler_gscheduler.so.4.0 00:14:26.770 LIB libspdk_scheduler_dpdk_governor.a 00:14:26.770 SO libspdk_keyring_linux.so.1.0 00:14:26.770 LIB libspdk_accel_error.a 00:14:26.770 SO libspdk_keyring_file.so.2.0 00:14:26.770 SO libspdk_scheduler_dpdk_governor.so.4.0 00:14:26.770 LIB libspdk_accel_iaa.a 00:14:26.770 LIB libspdk_accel_ioat.a 00:14:26.770 LIB libspdk_scheduler_dynamic.a 00:14:26.770 SO libspdk_accel_error.so.2.0 00:14:26.770 SYMLINK libspdk_scheduler_gscheduler.so 00:14:26.770 SO libspdk_accel_ioat.so.6.0 00:14:26.770 SYMLINK libspdk_keyring_linux.so 00:14:26.770 SO libspdk_scheduler_dynamic.so.4.0 00:14:26.770 SO libspdk_accel_iaa.so.3.0 00:14:26.770 LIB libspdk_blob_bdev.a 00:14:26.770 SYMLINK libspdk_scheduler_dpdk_governor.so 00:14:26.770 SYMLINK libspdk_keyring_file.so 00:14:26.770 LIB libspdk_accel_dsa.a 00:14:26.770 SYMLINK libspdk_accel_error.so 00:14:26.770 SO libspdk_blob_bdev.so.11.0 00:14:26.770 SYMLINK libspdk_accel_ioat.so 00:14:26.770 SYMLINK libspdk_scheduler_dynamic.so 00:14:27.032 SO libspdk_accel_dsa.so.5.0 00:14:27.032 SYMLINK libspdk_accel_iaa.so 00:14:27.032 SYMLINK libspdk_blob_bdev.so 00:14:27.032 LIB libspdk_vfu_device.a 00:14:27.032 SYMLINK libspdk_accel_dsa.so 00:14:27.032 SO libspdk_vfu_device.so.3.0 00:14:27.032 SYMLINK libspdk_vfu_device.so 00:14:27.293 LIB libspdk_fsdev_aio.a 00:14:27.293 LIB libspdk_sock_posix.a 00:14:27.293 SO libspdk_fsdev_aio.so.1.0 00:14:27.293 SO libspdk_sock_posix.so.6.0 00:14:27.293 SYMLINK libspdk_fsdev_aio.so 00:14:27.293 SYMLINK libspdk_sock_posix.so 00:14:27.555 CC module/bdev/gpt/gpt.o 00:14:27.555 CC module/bdev/gpt/vbdev_gpt.o 00:14:27.555 CC module/bdev/null/bdev_null.o 00:14:27.555 CC module/bdev/null/bdev_null_rpc.o 00:14:27.555 CC module/blobfs/bdev/blobfs_bdev.o 00:14:27.555 CC module/bdev/passthru/vbdev_passthru.o 00:14:27.555 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:14:27.555 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:14:27.555 CC module/bdev/malloc/bdev_malloc.o 00:14:27.555 CC module/bdev/delay/vbdev_delay.o 00:14:27.555 CC module/bdev/raid/bdev_raid.o 00:14:27.555 CC module/bdev/malloc/bdev_malloc_rpc.o 00:14:27.555 CC module/bdev/iscsi/bdev_iscsi.o 00:14:27.555 CC module/bdev/delay/vbdev_delay_rpc.o 00:14:27.555 CC module/bdev/raid/bdev_raid_rpc.o 00:14:27.555 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:14:27.555 CC module/bdev/error/vbdev_error.o 00:14:27.555 CC module/bdev/raid/bdev_raid_sb.o 00:14:27.555 CC module/bdev/virtio/bdev_virtio_scsi.o 00:14:27.555 CC module/bdev/split/vbdev_split.o 00:14:27.555 CC module/bdev/error/vbdev_error_rpc.o 00:14:27.555 CC module/bdev/nvme/bdev_nvme.o 00:14:27.555 CC module/bdev/split/vbdev_split_rpc.o 00:14:27.555 CC module/bdev/raid/raid0.o 00:14:27.555 CC module/bdev/virtio/bdev_virtio_blk.o 00:14:27.555 CC module/bdev/nvme/bdev_nvme_rpc.o 00:14:27.555 CC module/bdev/raid/raid1.o 00:14:27.555 CC module/bdev/virtio/bdev_virtio_rpc.o 00:14:27.555 CC module/bdev/ftl/bdev_ftl.o 00:14:27.555 CC module/bdev/lvol/vbdev_lvol.o 00:14:27.555 CC module/bdev/raid/concat.o 00:14:27.555 CC module/bdev/nvme/nvme_rpc.o 00:14:27.555 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:14:27.555 CC module/bdev/nvme/bdev_mdns_client.o 00:14:27.555 CC module/bdev/ftl/bdev_ftl_rpc.o 00:14:27.555 CC module/bdev/nvme/vbdev_opal.o 00:14:27.555 CC 
module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:14:27.555 CC module/bdev/nvme/vbdev_opal_rpc.o 00:14:27.555 CC module/bdev/aio/bdev_aio.o 00:14:27.555 CC module/bdev/zone_block/vbdev_zone_block.o 00:14:27.555 CC module/bdev/aio/bdev_aio_rpc.o 00:14:27.555 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:14:27.816 LIB libspdk_blobfs_bdev.a 00:14:27.816 SO libspdk_blobfs_bdev.so.6.0 00:14:27.816 LIB libspdk_bdev_split.a 00:14:27.816 LIB libspdk_bdev_error.a 00:14:27.816 LIB libspdk_bdev_null.a 00:14:27.816 SYMLINK libspdk_blobfs_bdev.so 00:14:27.816 SO libspdk_bdev_split.so.6.0 00:14:27.816 LIB libspdk_bdev_gpt.a 00:14:27.816 SO libspdk_bdev_error.so.6.0 00:14:27.816 SO libspdk_bdev_null.so.6.0 00:14:27.816 LIB libspdk_bdev_passthru.a 00:14:27.816 LIB libspdk_bdev_ftl.a 00:14:27.816 LIB libspdk_bdev_zone_block.a 00:14:27.816 LIB libspdk_bdev_aio.a 00:14:27.816 SO libspdk_bdev_gpt.so.6.0 00:14:27.816 SO libspdk_bdev_passthru.so.6.0 00:14:27.816 LIB libspdk_bdev_iscsi.a 00:14:27.816 SYMLINK libspdk_bdev_split.so 00:14:27.816 SYMLINK libspdk_bdev_error.so 00:14:27.816 SO libspdk_bdev_ftl.so.6.0 00:14:27.816 SYMLINK libspdk_bdev_null.so 00:14:27.816 SO libspdk_bdev_aio.so.6.0 00:14:27.816 SO libspdk_bdev_zone_block.so.6.0 00:14:27.816 LIB libspdk_bdev_malloc.a 00:14:28.077 SO libspdk_bdev_iscsi.so.6.0 00:14:28.077 SYMLINK libspdk_bdev_gpt.so 00:14:28.077 LIB libspdk_bdev_delay.a 00:14:28.077 SO libspdk_bdev_malloc.so.6.0 00:14:28.077 SYMLINK libspdk_bdev_passthru.so 00:14:28.077 SYMLINK libspdk_bdev_zone_block.so 00:14:28.077 SYMLINK libspdk_bdev_aio.so 00:14:28.077 SYMLINK libspdk_bdev_ftl.so 00:14:28.077 SO libspdk_bdev_delay.so.6.0 00:14:28.077 SYMLINK libspdk_bdev_iscsi.so 00:14:28.077 SYMLINK libspdk_bdev_malloc.so 00:14:28.077 LIB libspdk_bdev_virtio.a 00:14:28.077 SYMLINK libspdk_bdev_delay.so 00:14:28.077 SO libspdk_bdev_virtio.so.6.0 00:14:28.077 LIB libspdk_bdev_lvol.a 00:14:28.077 SO libspdk_bdev_lvol.so.6.0 00:14:28.077 SYMLINK libspdk_bdev_virtio.so 00:14:28.077 SYMLINK libspdk_bdev_lvol.so 00:14:28.338 LIB libspdk_bdev_raid.a 00:14:28.599 SO libspdk_bdev_raid.so.6.0 00:14:28.599 SYMLINK libspdk_bdev_raid.so 00:14:29.543 LIB libspdk_bdev_nvme.a 00:14:29.543 SO libspdk_bdev_nvme.so.7.0 00:14:29.804 SYMLINK libspdk_bdev_nvme.so 00:14:30.375 CC module/event/subsystems/keyring/keyring.o 00:14:30.375 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:14:30.375 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:14:30.375 CC module/event/subsystems/sock/sock.o 00:14:30.375 CC module/event/subsystems/vmd/vmd.o 00:14:30.375 CC module/event/subsystems/vmd/vmd_rpc.o 00:14:30.375 CC module/event/subsystems/fsdev/fsdev.o 00:14:30.375 CC module/event/subsystems/iobuf/iobuf.o 00:14:30.375 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:14:30.375 CC module/event/subsystems/scheduler/scheduler.o 00:14:30.636 LIB libspdk_event_vfu_tgt.a 00:14:30.636 LIB libspdk_event_keyring.a 00:14:30.636 LIB libspdk_event_sock.a 00:14:30.636 LIB libspdk_event_vhost_blk.a 00:14:30.636 SO libspdk_event_vfu_tgt.so.3.0 00:14:30.636 LIB libspdk_event_fsdev.a 00:14:30.636 LIB libspdk_event_vmd.a 00:14:30.636 LIB libspdk_event_scheduler.a 00:14:30.636 LIB libspdk_event_iobuf.a 00:14:30.636 SO libspdk_event_keyring.so.1.0 00:14:30.636 SO libspdk_event_sock.so.5.0 00:14:30.636 SO libspdk_event_vhost_blk.so.3.0 00:14:30.636 SO libspdk_event_scheduler.so.4.0 00:14:30.636 SO libspdk_event_fsdev.so.1.0 00:14:30.636 SO libspdk_event_vmd.so.6.0 00:14:30.636 SYMLINK libspdk_event_vfu_tgt.so 00:14:30.636 SO libspdk_event_iobuf.so.3.0 
00:14:30.636 SYMLINK libspdk_event_keyring.so 00:14:30.636 SYMLINK libspdk_event_sock.so 00:14:30.636 SYMLINK libspdk_event_vhost_blk.so 00:14:30.636 SYMLINK libspdk_event_fsdev.so 00:14:30.636 SYMLINK libspdk_event_scheduler.so 00:14:30.636 SYMLINK libspdk_event_vmd.so 00:14:30.636 SYMLINK libspdk_event_iobuf.so 00:14:31.207 CC module/event/subsystems/accel/accel.o 00:14:31.207 LIB libspdk_event_accel.a 00:14:31.207 SO libspdk_event_accel.so.6.0 00:14:31.207 SYMLINK libspdk_event_accel.so 00:14:31.778 CC module/event/subsystems/bdev/bdev.o 00:14:31.778 LIB libspdk_event_bdev.a 00:14:31.778 SO libspdk_event_bdev.so.6.0 00:14:32.039 SYMLINK libspdk_event_bdev.so 00:14:32.300 CC module/event/subsystems/scsi/scsi.o 00:14:32.300 CC module/event/subsystems/ublk/ublk.o 00:14:32.300 CC module/event/subsystems/nbd/nbd.o 00:14:32.300 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:14:32.300 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:14:32.562 LIB libspdk_event_scsi.a 00:14:32.562 LIB libspdk_event_ublk.a 00:14:32.562 LIB libspdk_event_nbd.a 00:14:32.562 SO libspdk_event_ublk.so.3.0 00:14:32.562 SO libspdk_event_scsi.so.6.0 00:14:32.562 SO libspdk_event_nbd.so.6.0 00:14:32.562 SYMLINK libspdk_event_scsi.so 00:14:32.562 LIB libspdk_event_nvmf.a 00:14:32.562 SYMLINK libspdk_event_ublk.so 00:14:32.562 SYMLINK libspdk_event_nbd.so 00:14:32.562 SO libspdk_event_nvmf.so.6.0 00:14:32.562 SYMLINK libspdk_event_nvmf.so 00:14:32.823 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:14:32.823 CC module/event/subsystems/iscsi/iscsi.o 00:14:33.084 LIB libspdk_event_vhost_scsi.a 00:14:33.084 LIB libspdk_event_iscsi.a 00:14:33.084 SO libspdk_event_vhost_scsi.so.3.0 00:14:33.084 SO libspdk_event_iscsi.so.6.0 00:14:33.084 SYMLINK libspdk_event_vhost_scsi.so 00:14:33.084 SYMLINK libspdk_event_iscsi.so 00:14:33.345 SO libspdk.so.6.0 00:14:33.345 SYMLINK libspdk.so 00:14:33.926 CXX app/trace/trace.o 00:14:33.926 CC app/trace_record/trace_record.o 00:14:33.926 CC app/spdk_nvme_identify/identify.o 00:14:33.926 CC app/spdk_top/spdk_top.o 00:14:33.926 CC app/spdk_lspci/spdk_lspci.o 00:14:33.926 TEST_HEADER include/spdk/accel.h 00:14:33.926 TEST_HEADER include/spdk/accel_module.h 00:14:33.926 CC test/rpc_client/rpc_client_test.o 00:14:33.926 TEST_HEADER include/spdk/assert.h 00:14:33.926 TEST_HEADER include/spdk/barrier.h 00:14:33.926 TEST_HEADER include/spdk/bdev.h 00:14:33.926 TEST_HEADER include/spdk/base64.h 00:14:33.926 CC app/spdk_nvme_discover/discovery_aer.o 00:14:33.926 TEST_HEADER include/spdk/bdev_module.h 00:14:33.926 TEST_HEADER include/spdk/bdev_zone.h 00:14:33.926 TEST_HEADER include/spdk/bit_array.h 00:14:33.926 TEST_HEADER include/spdk/bit_pool.h 00:14:33.926 TEST_HEADER include/spdk/blob_bdev.h 00:14:33.926 TEST_HEADER include/spdk/blobfs_bdev.h 00:14:33.926 TEST_HEADER include/spdk/blobfs.h 00:14:33.926 CC app/spdk_nvme_perf/perf.o 00:14:33.926 TEST_HEADER include/spdk/blob.h 00:14:33.926 TEST_HEADER include/spdk/conf.h 00:14:33.926 TEST_HEADER include/spdk/config.h 00:14:33.926 TEST_HEADER include/spdk/cpuset.h 00:14:33.926 TEST_HEADER include/spdk/crc16.h 00:14:33.926 TEST_HEADER include/spdk/crc32.h 00:14:33.926 TEST_HEADER include/spdk/dif.h 00:14:33.926 TEST_HEADER include/spdk/crc64.h 00:14:33.926 TEST_HEADER include/spdk/dma.h 00:14:33.926 TEST_HEADER include/spdk/env_dpdk.h 00:14:33.926 TEST_HEADER include/spdk/endian.h 00:14:33.926 TEST_HEADER include/spdk/event.h 00:14:33.926 TEST_HEADER include/spdk/env.h 00:14:33.926 CC examples/interrupt_tgt/interrupt_tgt.o 00:14:33.926 TEST_HEADER 
include/spdk/fd_group.h 00:14:33.926 TEST_HEADER include/spdk/fd.h 00:14:33.926 TEST_HEADER include/spdk/fsdev.h 00:14:33.926 TEST_HEADER include/spdk/file.h 00:14:33.926 TEST_HEADER include/spdk/fsdev_module.h 00:14:33.926 TEST_HEADER include/spdk/ftl.h 00:14:33.926 TEST_HEADER include/spdk/fuse_dispatcher.h 00:14:33.926 TEST_HEADER include/spdk/gpt_spec.h 00:14:33.926 TEST_HEADER include/spdk/histogram_data.h 00:14:33.926 TEST_HEADER include/spdk/hexlify.h 00:14:33.926 CC app/nvmf_tgt/nvmf_main.o 00:14:33.926 TEST_HEADER include/spdk/idxd.h 00:14:33.926 TEST_HEADER include/spdk/init.h 00:14:33.926 TEST_HEADER include/spdk/idxd_spec.h 00:14:33.926 TEST_HEADER include/spdk/ioat.h 00:14:33.926 TEST_HEADER include/spdk/ioat_spec.h 00:14:33.926 TEST_HEADER include/spdk/iscsi_spec.h 00:14:33.926 CC app/spdk_dd/spdk_dd.o 00:14:33.926 TEST_HEADER include/spdk/json.h 00:14:33.926 TEST_HEADER include/spdk/keyring.h 00:14:33.926 TEST_HEADER include/spdk/jsonrpc.h 00:14:33.926 TEST_HEADER include/spdk/keyring_module.h 00:14:33.926 TEST_HEADER include/spdk/likely.h 00:14:33.926 TEST_HEADER include/spdk/log.h 00:14:33.926 TEST_HEADER include/spdk/md5.h 00:14:33.926 TEST_HEADER include/spdk/lvol.h 00:14:33.926 TEST_HEADER include/spdk/memory.h 00:14:33.926 CC app/iscsi_tgt/iscsi_tgt.o 00:14:33.926 TEST_HEADER include/spdk/mmio.h 00:14:33.926 TEST_HEADER include/spdk/nbd.h 00:14:33.926 TEST_HEADER include/spdk/net.h 00:14:33.926 TEST_HEADER include/spdk/notify.h 00:14:33.926 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:14:33.926 TEST_HEADER include/spdk/nvme.h 00:14:33.926 TEST_HEADER include/spdk/nvme_intel.h 00:14:33.926 TEST_HEADER include/spdk/nvme_ocssd.h 00:14:33.926 TEST_HEADER include/spdk/nvme_spec.h 00:14:33.926 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:14:33.926 TEST_HEADER include/spdk/nvme_zns.h 00:14:33.926 TEST_HEADER include/spdk/nvmf_cmd.h 00:14:33.926 TEST_HEADER include/spdk/nvmf.h 00:14:33.926 CC app/spdk_tgt/spdk_tgt.o 00:14:33.926 TEST_HEADER include/spdk/nvmf_transport.h 00:14:33.926 TEST_HEADER include/spdk/nvmf_spec.h 00:14:33.926 TEST_HEADER include/spdk/opal_spec.h 00:14:33.926 TEST_HEADER include/spdk/pci_ids.h 00:14:33.926 TEST_HEADER include/spdk/queue.h 00:14:33.926 TEST_HEADER include/spdk/opal.h 00:14:33.926 TEST_HEADER include/spdk/pipe.h 00:14:33.926 TEST_HEADER include/spdk/scheduler.h 00:14:33.926 TEST_HEADER include/spdk/reduce.h 00:14:33.926 TEST_HEADER include/spdk/rpc.h 00:14:33.926 TEST_HEADER include/spdk/scsi.h 00:14:33.926 TEST_HEADER include/spdk/scsi_spec.h 00:14:33.926 TEST_HEADER include/spdk/stdinc.h 00:14:33.926 TEST_HEADER include/spdk/sock.h 00:14:33.926 TEST_HEADER include/spdk/string.h 00:14:33.926 TEST_HEADER include/spdk/thread.h 00:14:33.926 TEST_HEADER include/spdk/trace.h 00:14:33.926 TEST_HEADER include/spdk/trace_parser.h 00:14:33.926 TEST_HEADER include/spdk/tree.h 00:14:33.926 TEST_HEADER include/spdk/ublk.h 00:14:33.926 TEST_HEADER include/spdk/util.h 00:14:33.926 TEST_HEADER include/spdk/uuid.h 00:14:33.926 TEST_HEADER include/spdk/version.h 00:14:33.926 TEST_HEADER include/spdk/vfio_user_pci.h 00:14:33.926 TEST_HEADER include/spdk/vfio_user_spec.h 00:14:33.926 TEST_HEADER include/spdk/vhost.h 00:14:33.926 TEST_HEADER include/spdk/vmd.h 00:14:33.926 TEST_HEADER include/spdk/xor.h 00:14:33.926 TEST_HEADER include/spdk/zipf.h 00:14:33.926 CXX test/cpp_headers/accel.o 00:14:33.926 CXX test/cpp_headers/barrier.o 00:14:33.926 CXX test/cpp_headers/assert.o 00:14:33.926 CXX test/cpp_headers/accel_module.o 00:14:33.926 CXX 
test/cpp_headers/base64.o 00:14:33.926 CXX test/cpp_headers/bdev_module.o 00:14:33.926 CXX test/cpp_headers/bdev.o 00:14:33.926 CXX test/cpp_headers/bit_pool.o 00:14:33.926 CXX test/cpp_headers/bdev_zone.o 00:14:33.926 CXX test/cpp_headers/blob_bdev.o 00:14:33.926 CXX test/cpp_headers/bit_array.o 00:14:33.926 CXX test/cpp_headers/blobfs_bdev.o 00:14:33.926 CXX test/cpp_headers/blobfs.o 00:14:33.926 CXX test/cpp_headers/conf.o 00:14:33.926 CXX test/cpp_headers/blob.o 00:14:33.926 CXX test/cpp_headers/config.o 00:14:33.926 CXX test/cpp_headers/cpuset.o 00:14:33.926 CXX test/cpp_headers/crc16.o 00:14:33.926 CXX test/cpp_headers/crc32.o 00:14:33.926 CXX test/cpp_headers/dif.o 00:14:33.926 CXX test/cpp_headers/crc64.o 00:14:33.926 CXX test/cpp_headers/dma.o 00:14:33.926 CXX test/cpp_headers/event.o 00:14:33.926 CXX test/cpp_headers/fd_group.o 00:14:33.926 CXX test/cpp_headers/endian.o 00:14:33.926 CXX test/cpp_headers/env_dpdk.o 00:14:33.926 CXX test/cpp_headers/ftl.o 00:14:33.926 CXX test/cpp_headers/fsdev_module.o 00:14:33.926 CXX test/cpp_headers/gpt_spec.o 00:14:33.926 CXX test/cpp_headers/hexlify.o 00:14:33.926 CXX test/cpp_headers/idxd.o 00:14:33.926 CXX test/cpp_headers/env.o 00:14:33.926 CXX test/cpp_headers/fd.o 00:14:33.926 CXX test/cpp_headers/init.o 00:14:33.926 CXX test/cpp_headers/ioat_spec.o 00:14:33.926 CXX test/cpp_headers/idxd_spec.o 00:14:33.926 CXX test/cpp_headers/ioat.o 00:14:33.926 CXX test/cpp_headers/iscsi_spec.o 00:14:33.926 CXX test/cpp_headers/fsdev.o 00:14:33.926 CXX test/cpp_headers/file.o 00:14:33.926 CXX test/cpp_headers/jsonrpc.o 00:14:33.926 CXX test/cpp_headers/keyring_module.o 00:14:33.926 CC test/env/memory/memory_ut.o 00:14:33.926 CXX test/cpp_headers/fuse_dispatcher.o 00:14:33.926 CC examples/ioat/perf/perf.o 00:14:33.926 CXX test/cpp_headers/log.o 00:14:33.926 CXX test/cpp_headers/histogram_data.o 00:14:33.926 CXX test/cpp_headers/json.o 00:14:33.926 CXX test/cpp_headers/keyring.o 00:14:33.926 CXX test/cpp_headers/md5.o 00:14:33.926 CXX test/cpp_headers/lvol.o 00:14:33.926 CXX test/cpp_headers/nbd.o 00:14:33.926 CXX test/cpp_headers/notify.o 00:14:33.926 CXX test/cpp_headers/net.o 00:14:33.926 CXX test/cpp_headers/memory.o 00:14:33.926 CXX test/cpp_headers/nvme.o 00:14:33.926 CXX test/cpp_headers/mmio.o 00:14:33.926 CC test/env/vtophys/vtophys.o 00:14:33.926 CXX test/cpp_headers/nvme_ocssd_spec.o 00:14:33.926 CXX test/cpp_headers/likely.o 00:14:33.926 CXX test/cpp_headers/nvme_zns.o 00:14:33.926 CXX test/cpp_headers/nvmf_fc_spec.o 00:14:33.926 CC test/app/jsoncat/jsoncat.o 00:14:33.926 CXX test/cpp_headers/nvmf_spec.o 00:14:33.926 CC test/app/histogram_perf/histogram_perf.o 00:14:33.926 CXX test/cpp_headers/nvmf_transport.o 00:14:33.926 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:14:33.926 LINK spdk_lspci 00:14:33.926 CXX test/cpp_headers/opal_spec.o 00:14:33.926 CXX test/cpp_headers/nvme_intel.o 00:14:33.926 CXX test/cpp_headers/nvme_ocssd.o 00:14:33.926 CXX test/cpp_headers/pci_ids.o 00:14:33.926 CXX test/cpp_headers/pipe.o 00:14:33.926 CXX test/cpp_headers/queue.o 00:14:33.926 CXX test/cpp_headers/reduce.o 00:14:33.926 CXX test/cpp_headers/nvme_spec.o 00:14:33.926 CXX test/cpp_headers/nvmf.o 00:14:33.926 CXX test/cpp_headers/scheduler.o 00:14:33.926 CXX test/cpp_headers/scsi_spec.o 00:14:33.926 CXX test/cpp_headers/opal.o 00:14:33.926 CXX test/cpp_headers/scsi.o 00:14:33.926 CXX test/cpp_headers/sock.o 00:14:33.926 CXX test/cpp_headers/nvmf_cmd.o 00:14:33.926 CXX test/cpp_headers/string.o 00:14:33.926 CXX test/cpp_headers/trace.o 
00:14:33.926 CXX test/cpp_headers/trace_parser.o 00:14:33.926 CXX test/cpp_headers/tree.o 00:14:33.926 CXX test/cpp_headers/ublk.o 00:14:33.926 CXX test/cpp_headers/stdinc.o 00:14:34.201 CXX test/cpp_headers/uuid.o 00:14:34.201 CXX test/cpp_headers/version.o 00:14:34.201 CXX test/cpp_headers/vfio_user_spec.o 00:14:34.201 CXX test/cpp_headers/vfio_user_pci.o 00:14:34.201 CXX test/cpp_headers/vhost.o 00:14:34.201 CC app/fio/nvme/fio_plugin.o 00:14:34.201 CXX test/cpp_headers/vmd.o 00:14:34.201 CXX test/cpp_headers/zipf.o 00:14:34.201 CXX test/cpp_headers/xor.o 00:14:34.201 CXX test/cpp_headers/rpc.o 00:14:34.201 CXX test/cpp_headers/thread.o 00:14:34.201 CXX test/cpp_headers/util.o 00:14:34.201 LINK spdk_nvme_discover 00:14:34.201 CC test/env/pci/pci_ut.o 00:14:34.201 CC test/dma/test_dma/test_dma.o 00:14:34.201 CC test/thread/poller_perf/poller_perf.o 00:14:34.201 CC test/app/bdev_svc/bdev_svc.o 00:14:34.201 LINK interrupt_tgt 00:14:34.201 CC examples/ioat/verify/verify.o 00:14:34.201 CC examples/util/zipf/zipf.o 00:14:34.201 CC app/fio/bdev/fio_plugin.o 00:14:34.201 CC test/app/stub/stub.o 00:14:34.464 LINK vtophys 00:14:34.464 LINK jsoncat 00:14:34.464 LINK spdk_dd 00:14:34.464 LINK bdev_svc 00:14:34.464 LINK env_dpdk_post_init 00:14:34.464 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:14:34.464 LINK rpc_client_test 00:14:34.464 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:14:34.464 CC test/env/mem_callbacks/mem_callbacks.o 00:14:34.464 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:14:34.464 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:14:34.724 LINK stub 00:14:34.724 LINK verify 00:14:34.724 LINK nvmf_tgt 00:14:34.724 LINK spdk_trace_record 00:14:34.724 LINK iscsi_tgt 00:14:34.724 LINK spdk_tgt 00:14:34.724 LINK ioat_perf 00:14:34.724 LINK test_dma 00:14:34.983 LINK histogram_perf 00:14:34.983 LINK poller_perf 00:14:34.983 LINK spdk_nvme_identify 00:14:34.983 LINK nvme_fuzz 00:14:34.983 LINK zipf 00:14:34.983 LINK vhost_fuzz 00:14:34.983 LINK pci_ut 00:14:34.983 LINK spdk_bdev 00:14:34.983 LINK spdk_trace 00:14:35.245 LINK mem_callbacks 00:14:35.245 LINK spdk_nvme 00:14:35.245 LINK spdk_nvme_perf 00:14:35.504 CC test/event/reactor_perf/reactor_perf.o 00:14:35.504 LINK memory_ut 00:14:35.504 CC test/event/event_perf/event_perf.o 00:14:35.504 CC test/nvme/reset/reset.o 00:14:35.504 CC test/nvme/fdp/fdp.o 00:14:35.504 CC test/nvme/startup/startup.o 00:14:35.504 CC test/nvme/compliance/nvme_compliance.o 00:14:35.504 CC test/nvme/fused_ordering/fused_ordering.o 00:14:35.504 CC test/event/reactor/reactor.o 00:14:35.504 CC test/nvme/connect_stress/connect_stress.o 00:14:35.504 CC test/nvme/reserve/reserve.o 00:14:35.504 CC test/nvme/aer/aer.o 00:14:35.504 CC test/nvme/sgl/sgl.o 00:14:35.504 CC test/nvme/doorbell_aers/doorbell_aers.o 00:14:35.504 CC examples/vmd/lsvmd/lsvmd.o 00:14:35.504 CC test/nvme/boot_partition/boot_partition.o 00:14:35.504 CC examples/vmd/led/led.o 00:14:35.504 CC test/nvme/simple_copy/simple_copy.o 00:14:35.504 CC test/nvme/cuse/cuse.o 00:14:35.504 CC test/nvme/overhead/overhead.o 00:14:35.504 CC test/nvme/e2edp/nvme_dp.o 00:14:35.504 CC test/event/app_repeat/app_repeat.o 00:14:35.504 CC test/nvme/err_injection/err_injection.o 00:14:35.504 CC examples/idxd/perf/perf.o 00:14:35.504 LINK spdk_top 00:14:35.504 CC test/blobfs/mkfs/mkfs.o 00:14:35.504 CC test/event/scheduler/scheduler.o 00:14:35.504 CC examples/sock/hello_world/hello_sock.o 00:14:35.504 CC test/accel/dif/dif.o 00:14:35.504 CC examples/thread/thread/thread_ex.o 00:14:35.504 CC app/vhost/vhost.o 00:14:35.504 CC 
test/lvol/esnap/esnap.o 00:14:35.504 LINK event_perf 00:14:35.504 LINK reactor_perf 00:14:35.504 LINK led 00:14:35.504 LINK lsvmd 00:14:35.504 LINK startup 00:14:35.504 LINK connect_stress 00:14:35.504 LINK reactor 00:14:35.765 LINK boot_partition 00:14:35.765 LINK doorbell_aers 00:14:35.765 LINK reserve 00:14:35.765 LINK app_repeat 00:14:35.765 LINK fused_ordering 00:14:35.765 LINK nvme_compliance 00:14:35.765 LINK err_injection 00:14:35.765 LINK simple_copy 00:14:35.765 LINK reset 00:14:35.765 LINK sgl 00:14:35.765 LINK nvme_dp 00:14:35.765 LINK mkfs 00:14:35.765 LINK hello_sock 00:14:35.765 LINK overhead 00:14:35.765 LINK aer 00:14:35.765 LINK vhost 00:14:35.765 LINK scheduler 00:14:35.765 LINK fdp 00:14:35.765 LINK thread 00:14:35.765 LINK idxd_perf 00:14:36.026 LINK iscsi_fuzz 00:14:36.026 LINK dif 00:14:36.286 CC examples/nvme/hotplug/hotplug.o 00:14:36.286 CC examples/nvme/abort/abort.o 00:14:36.286 CC examples/nvme/cmb_copy/cmb_copy.o 00:14:36.286 CC examples/nvme/nvme_manage/nvme_manage.o 00:14:36.286 CC examples/nvme/arbitration/arbitration.o 00:14:36.286 CC examples/nvme/hello_world/hello_world.o 00:14:36.286 CC examples/nvme/reconnect/reconnect.o 00:14:36.286 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:14:36.286 CC examples/accel/perf/accel_perf.o 00:14:36.286 CC examples/blob/cli/blobcli.o 00:14:36.286 CC examples/blob/hello_world/hello_blob.o 00:14:36.286 CC examples/fsdev/hello_world/hello_fsdev.o 00:14:36.547 LINK pmr_persistence 00:14:36.547 LINK cmb_copy 00:14:36.547 LINK hello_world 00:14:36.547 LINK hotplug 00:14:36.547 LINK cuse 00:14:36.547 LINK arbitration 00:14:36.547 LINK abort 00:14:36.547 LINK reconnect 00:14:36.807 LINK hello_blob 00:14:36.807 CC test/bdev/bdevio/bdevio.o 00:14:36.807 LINK nvme_manage 00:14:36.807 LINK hello_fsdev 00:14:36.807 LINK accel_perf 00:14:36.807 LINK blobcli 00:14:37.069 LINK bdevio 00:14:37.330 CC examples/bdev/hello_world/hello_bdev.o 00:14:37.330 CC examples/bdev/bdevperf/bdevperf.o 00:14:37.591 LINK hello_bdev 00:14:38.161 LINK bdevperf 00:14:38.733 CC examples/nvmf/nvmf/nvmf.o 00:14:38.993 LINK nvmf 00:14:39.934 LINK esnap 00:14:39.935 00:14:39.935 real 0m53.874s 00:14:39.935 user 7m43.466s 00:14:39.935 sys 4m26.397s 00:14:39.935 22:13:35 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:14:39.935 22:13:35 make -- common/autotest_common.sh@10 -- $ set +x 00:14:39.935 ************************************ 00:14:39.935 END TEST make 00:14:39.935 ************************************ 00:14:39.935 22:13:35 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:14:39.935 22:13:35 -- pm/common@29 -- $ signal_monitor_resources TERM 00:14:39.935 22:13:35 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:14:39.935 22:13:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:14:39.935 22:13:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:14:39.935 22:13:35 -- pm/common@44 -- $ pid=4074145 00:14:39.935 22:13:35 -- pm/common@50 -- $ kill -TERM 4074145 00:14:39.935 22:13:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:14:39.935 22:13:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:14:39.935 22:13:35 -- pm/common@44 -- $ pid=4074146 00:14:40.196 22:13:35 -- pm/common@50 -- $ kill -TERM 4074146 00:14:40.196 22:13:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:14:40.196 22:13:35 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:14:40.196 22:13:35 -- pm/common@44 -- $ pid=4074149 00:14:40.196 22:13:35 -- pm/common@50 -- $ kill -TERM 4074149 00:14:40.196 22:13:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:14:40.196 22:13:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:14:40.196 22:13:35 -- pm/common@44 -- $ pid=4074174 00:14:40.196 22:13:35 -- pm/common@50 -- $ sudo -E kill -TERM 4074174 00:14:40.196 22:13:35 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:40.196 22:13:35 -- common/autotest_common.sh@1681 -- # lcov --version 00:14:40.196 22:13:35 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:40.196 22:13:35 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:40.196 22:13:35 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:40.196 22:13:35 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:40.196 22:13:35 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:40.196 22:13:35 -- scripts/common.sh@336 -- # IFS=.-: 00:14:40.196 22:13:35 -- scripts/common.sh@336 -- # read -ra ver1 00:14:40.196 22:13:35 -- scripts/common.sh@337 -- # IFS=.-: 00:14:40.196 22:13:35 -- scripts/common.sh@337 -- # read -ra ver2 00:14:40.196 22:13:35 -- scripts/common.sh@338 -- # local 'op=<' 00:14:40.196 22:13:35 -- scripts/common.sh@340 -- # ver1_l=2 00:14:40.196 22:13:35 -- scripts/common.sh@341 -- # ver2_l=1 00:14:40.196 22:13:35 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:40.196 22:13:35 -- scripts/common.sh@344 -- # case "$op" in 00:14:40.196 22:13:35 -- scripts/common.sh@345 -- # : 1 00:14:40.196 22:13:35 -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:40.196 22:13:35 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:40.196 22:13:35 -- scripts/common.sh@365 -- # decimal 1 00:14:40.196 22:13:35 -- scripts/common.sh@353 -- # local d=1 00:14:40.196 22:13:35 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:40.196 22:13:35 -- scripts/common.sh@355 -- # echo 1 00:14:40.196 22:13:35 -- scripts/common.sh@365 -- # ver1[v]=1 00:14:40.196 22:13:35 -- scripts/common.sh@366 -- # decimal 2 00:14:40.196 22:13:35 -- scripts/common.sh@353 -- # local d=2 00:14:40.196 22:13:35 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:40.196 22:13:35 -- scripts/common.sh@355 -- # echo 2 00:14:40.196 22:13:35 -- scripts/common.sh@366 -- # ver2[v]=2 00:14:40.196 22:13:35 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:40.196 22:13:35 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:40.196 22:13:35 -- scripts/common.sh@368 -- # return 0 00:14:40.196 22:13:35 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:40.196 22:13:35 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:40.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.196 --rc genhtml_branch_coverage=1 00:14:40.196 --rc genhtml_function_coverage=1 00:14:40.196 --rc genhtml_legend=1 00:14:40.196 --rc geninfo_all_blocks=1 00:14:40.196 --rc geninfo_unexecuted_blocks=1 00:14:40.196 00:14:40.196 ' 00:14:40.196 22:13:35 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:40.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.196 --rc genhtml_branch_coverage=1 00:14:40.196 --rc genhtml_function_coverage=1 00:14:40.196 --rc genhtml_legend=1 00:14:40.196 --rc geninfo_all_blocks=1 00:14:40.196 --rc geninfo_unexecuted_blocks=1 00:14:40.196 00:14:40.196 ' 00:14:40.196 22:13:35 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:40.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.196 --rc genhtml_branch_coverage=1 00:14:40.196 --rc genhtml_function_coverage=1 00:14:40.196 --rc genhtml_legend=1 00:14:40.196 --rc geninfo_all_blocks=1 00:14:40.196 --rc geninfo_unexecuted_blocks=1 00:14:40.196 00:14:40.196 ' 00:14:40.196 22:13:35 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:40.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.196 --rc genhtml_branch_coverage=1 00:14:40.196 --rc genhtml_function_coverage=1 00:14:40.196 --rc genhtml_legend=1 00:14:40.196 --rc geninfo_all_blocks=1 00:14:40.196 --rc geninfo_unexecuted_blocks=1 00:14:40.196 00:14:40.196 ' 00:14:40.196 22:13:35 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:40.196 22:13:35 -- nvmf/common.sh@7 -- # uname -s 00:14:40.196 22:13:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:40.196 22:13:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:40.196 22:13:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:40.196 22:13:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:40.196 22:13:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:40.196 22:13:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:40.196 22:13:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:40.196 22:13:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:40.196 22:13:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:40.196 22:13:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:40.196 22:13:35 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:14:40.196 22:13:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:14:40.196 22:13:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:40.196 22:13:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:40.196 22:13:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:40.196 22:13:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:40.196 22:13:35 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:40.196 22:13:35 -- scripts/common.sh@15 -- # shopt -s extglob 00:14:40.196 22:13:35 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:40.196 22:13:35 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:40.196 22:13:35 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:40.197 22:13:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.197 22:13:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.197 22:13:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.197 22:13:35 -- paths/export.sh@5 -- # export PATH 00:14:40.197 22:13:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.197 22:13:35 -- nvmf/common.sh@51 -- # : 0 00:14:40.197 22:13:35 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:40.197 22:13:35 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:40.197 22:13:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:40.197 22:13:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:40.197 22:13:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:40.197 22:13:35 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:40.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:40.197 22:13:35 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:40.197 22:13:35 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:40.197 22:13:35 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:40.197 22:13:35 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:14:40.462 22:13:35 -- spdk/autotest.sh@32 -- # uname -s 00:14:40.462 22:13:35 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:14:40.462 22:13:35 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:14:40.463 22:13:35 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
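Note: the autotest.sh trace above saves the kernel's core_pattern before the run and points it at SPDK's core-collector. The xtrace shows the echo commands but not their redirect target; /proc/sys/kernel/core_pattern is the standard destination and is assumed in this sketch, with $rootdir and $output_dir standing in for the workspace paths.

    # Hedged sketch of the core_pattern handling traced above (not the
    # verbatim autotest.sh code). Writing /proc/sys requires root.
    old_core_pattern=$(</proc/sys/kernel/core_pattern)   # e.g. "|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h"
    mkdir -p "$output_dir/coredumps"
    # A leading '|' makes the kernel pipe each core dump to the command:
    # %P = PID of the dumping process, %s = signal, %t = time of dump.
    echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern
    # ... run the tests ...
    echo "$old_core_pattern" > /proc/sys/kernel/core_pattern   # restore afterwards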
00:14:40.463 22:13:35 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:14:40.463 22:13:35 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:14:40.463 22:13:35 -- spdk/autotest.sh@44 -- # modprobe nbd 00:14:40.463 22:13:35 -- spdk/autotest.sh@46 -- # type -P udevadm 00:14:40.463 22:13:35 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:14:40.463 22:13:35 -- spdk/autotest.sh@48 -- # udevadm_pid=4139365 00:14:40.463 22:13:35 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:14:40.463 22:13:35 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:14:40.463 22:13:35 -- pm/common@17 -- # local monitor 00:14:40.463 22:13:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:14:40.463 22:13:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:14:40.463 22:13:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:14:40.463 22:13:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:14:40.463 22:13:35 -- pm/common@21 -- # date +%s 00:14:40.463 22:13:35 -- pm/common@25 -- # sleep 1 00:14:40.463 22:13:35 -- pm/common@21 -- # date +%s 00:14:40.463 22:13:35 -- pm/common@21 -- # date +%s 00:14:40.463 22:13:35 -- pm/common@21 -- # date +%s 00:14:40.463 22:13:35 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727813615 00:14:40.463 22:13:35 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727813615 00:14:40.463 22:13:35 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727813615 00:14:40.463 22:13:35 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727813615 00:14:40.463 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727813615_collect-vmstat.pm.log 00:14:40.463 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727813615_collect-cpu-load.pm.log 00:14:40.463 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727813615_collect-cpu-temp.pm.log 00:14:40.463 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727813615_collect-bmc-pm.bmc.pm.log 00:14:41.407 22:13:36 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:14:41.407 22:13:36 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:14:41.407 22:13:36 -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:41.407 22:13:36 -- common/autotest_common.sh@10 -- # set +x 00:14:41.407 22:13:36 -- spdk/autotest.sh@59 -- # create_test_list 00:14:41.407 22:13:36 -- common/autotest_common.sh@748 -- # xtrace_disable 00:14:41.407 22:13:36 -- common/autotest_common.sh@10 -- # set +x 00:14:41.407 22:13:36 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:14:41.407 22:13:36 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:14:41.407 22:13:36 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:14:41.407 22:13:36 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:14:41.407 22:13:36 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:14:41.407 22:13:36 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:14:41.407 22:13:36 -- common/autotest_common.sh@1455 -- # uname 00:14:41.407 22:13:36 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:14:41.407 22:13:36 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:14:41.407 22:13:36 -- common/autotest_common.sh@1475 -- # uname 00:14:41.407 22:13:36 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:14:41.407 22:13:36 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:14:41.407 22:13:36 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:14:41.407 lcov: LCOV version 1.15 00:14:41.407 22:13:36 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:14:56.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:14:56.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:15:11.230 22:14:06 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:15:11.230 22:14:06 -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:11.230 22:14:06 -- common/autotest_common.sh@10 -- # set +x 00:15:11.230 22:14:06 -- spdk/autotest.sh@78 -- # rm -f 00:15:11.230 22:14:06 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:15:14.549 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:15:14.549 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:15:14.810 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:15:14.810 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:15:14.810 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:15:14.810 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:15:14.810 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:15:14.810 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:15:14.810 0000:65:00.0 (144d a80a): Already using the nvme driver 00:15:14.810 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:15:14.810 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:15:14.810 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:15:15.071 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:15:15.071 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:15:15.071 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:15:15.071 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:15:15.071 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:15:15.332 22:14:10 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:15:15.332 22:14:10 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:15:15.332 22:14:10 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:15:15.332 22:14:10 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:15:15.332 22:14:10 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:15.332 22:14:10 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:15:15.332 22:14:10 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:15:15.332 22:14:10 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:15.332 22:14:10 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:15.332 22:14:10 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:15:15.332 22:14:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:15:15.332 22:14:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:15:15.332 22:14:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:15:15.332 22:14:10 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:15:15.332 22:14:10 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:15:15.332 No valid GPT data, bailing 00:15:15.332 22:14:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:15:15.332 22:14:10 -- scripts/common.sh@394 -- # pt= 00:15:15.332 22:14:10 -- scripts/common.sh@395 -- # return 1 00:15:15.332 22:14:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:15:15.332 1+0 records in 00:15:15.332 1+0 records out 00:15:15.332 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00154857 s, 677 MB/s 00:15:15.332 22:14:10 -- spdk/autotest.sh@105 -- # sync 00:15:15.332 22:14:10 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:15:15.332 22:14:10 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:15:15.332 22:14:10 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:15:25.449 22:14:18 -- spdk/autotest.sh@111 -- # uname -s 00:15:25.449 22:14:18 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:15:25.449 22:14:18 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:15:25.449 22:14:18 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:15:27.365 Hugepages 00:15:27.365 node hugesize free / total 00:15:27.365 node0 1048576kB 0 / 0 00:15:27.365 node0 2048kB 0 / 0 00:15:27.366 node1 1048576kB 0 / 0 00:15:27.366 node1 2048kB 0 / 0 00:15:27.366 00:15:27.366 Type BDF Vendor Device NUMA Driver Device Block devices 00:15:27.366 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:15:27.366 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:15:27.366 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:15:27.366 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:15:27.366 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:15:27.366 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:15:27.366 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:15:27.366 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:15:27.366 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:15:27.366 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:15:27.366 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:15:27.366 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:15:27.366 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:15:27.366 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:15:27.366 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:15:27.366 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:15:27.366 I/OAT 0000:80:01.7 8086 
0b00 1 ioatdma - - 00:15:27.366 22:14:22 -- spdk/autotest.sh@117 -- # uname -s 00:15:27.366 22:14:22 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:15:27.366 22:14:22 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:15:27.366 22:14:22 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:15:30.666 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:15:30.666 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:15:30.666 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:15:30.666 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:15:30.666 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:15:30.666 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:15:30.666 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:15:30.666 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:15:30.666 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:15:30.666 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:15:30.927 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:15:30.927 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:15:30.927 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:15:30.927 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:15:30.927 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:15:30.927 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:15:32.839 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:15:32.839 22:14:28 -- common/autotest_common.sh@1515 -- # sleep 1 00:15:34.222 22:14:29 -- common/autotest_common.sh@1516 -- # bdfs=() 00:15:34.222 22:14:29 -- common/autotest_common.sh@1516 -- # local bdfs 00:15:34.222 22:14:29 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:15:34.222 22:14:29 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:15:34.222 22:14:29 -- common/autotest_common.sh@1496 -- # bdfs=() 00:15:34.222 22:14:29 -- common/autotest_common.sh@1496 -- # local bdfs 00:15:34.222 22:14:29 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:15:34.222 22:14:29 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:15:34.222 22:14:29 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:15:34.222 22:14:29 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:15:34.222 22:14:29 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:15:34.222 22:14:29 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:15:37.523 Waiting for block devices as requested 00:15:37.523 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:15:37.523 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:15:37.523 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:15:37.783 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:15:37.783 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:15:37.783 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:15:38.043 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:15:38.043 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:15:38.043 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:15:38.304 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:15:38.304 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:15:38.304 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:15:38.564 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:15:38.564 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:15:38.564 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:15:38.564 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:15:38.824 0000:00:01.1 (8086 0b00): 
vfio-pci -> ioatdma 00:15:39.084 22:14:34 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:15:39.085 22:14:34 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:15:39.085 22:14:34 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:15:39.085 22:14:34 -- common/autotest_common.sh@1485 -- # grep 0000:65:00.0/nvme/nvme 00:15:39.085 22:14:34 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:15:39.085 22:14:34 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:15:39.085 22:14:34 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:15:39.085 22:14:34 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:15:39.085 22:14:34 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:15:39.085 22:14:34 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:15:39.085 22:14:34 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:15:39.085 22:14:34 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:15:39.085 22:14:34 -- common/autotest_common.sh@1529 -- # grep oacs 00:15:39.085 22:14:34 -- common/autotest_common.sh@1529 -- # oacs=' 0x5f' 00:15:39.085 22:14:34 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:15:39.085 22:14:34 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:15:39.085 22:14:34 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:15:39.085 22:14:34 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:15:39.085 22:14:34 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:15:39.085 22:14:34 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:15:39.085 22:14:34 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:15:39.085 22:14:34 -- common/autotest_common.sh@1541 -- # continue 00:15:39.085 22:14:34 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:15:39.085 22:14:34 -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:39.085 22:14:34 -- common/autotest_common.sh@10 -- # set +x 00:15:39.085 22:14:34 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:15:39.085 22:14:34 -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:39.085 22:14:34 -- common/autotest_common.sh@10 -- # set +x 00:15:39.085 22:14:34 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:15:42.387 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:15:42.388 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:15:42.388 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:15:42.388 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:15:42.388 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:15:42.388 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:15:42.388 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:15:42.388 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:15:42.388 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:15:42.388 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:15:42.388 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:15:42.648 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:15:42.648 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:15:42.648 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:15:42.648 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:15:42.648 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:15:42.648 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:15:42.910 22:14:38 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:15:42.910 22:14:38 -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:42.910 22:14:38 -- common/autotest_common.sh@10 -- # set +x 00:15:42.910 22:14:38 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:15:42.910 22:14:38 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:15:42.910 22:14:38 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:15:42.910 22:14:38 -- common/autotest_common.sh@1561 -- # bdfs=() 00:15:42.910 22:14:38 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:15:42.910 22:14:38 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:15:42.910 22:14:38 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:15:42.910 22:14:38 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:15:42.910 22:14:38 -- common/autotest_common.sh@1496 -- # bdfs=() 00:15:42.910 22:14:38 -- common/autotest_common.sh@1496 -- # local bdfs 00:15:42.910 22:14:38 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:15:42.910 22:14:38 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:15:42.910 22:14:38 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:15:43.171 22:14:38 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:15:43.171 22:14:38 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:15:43.171 22:14:38 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:15:43.171 22:14:38 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:15:43.171 22:14:38 -- common/autotest_common.sh@1564 -- # device=0xa80a 00:15:43.171 22:14:38 -- common/autotest_common.sh@1565 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:15:43.171 22:14:38 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:15:43.171 22:14:38 -- common/autotest_common.sh@1570 -- # return 0 00:15:43.171 22:14:38 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:15:43.171 22:14:38 -- common/autotest_common.sh@1578 -- # return 0 00:15:43.171 22:14:38 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:15:43.171 22:14:38 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:15:43.171 22:14:38 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:15:43.171 22:14:38 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:15:43.171 22:14:38 -- spdk/autotest.sh@149 -- # timing_enter lib 00:15:43.171 22:14:38 -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:43.171 22:14:38 -- common/autotest_common.sh@10 -- # set +x 00:15:43.171 22:14:38 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:15:43.171 22:14:38 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:15:43.171 22:14:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:43.171 22:14:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:43.171 22:14:38 -- common/autotest_common.sh@10 -- # set +x 00:15:43.171 ************************************ 00:15:43.171 START TEST env 00:15:43.171 ************************************ 00:15:43.171 22:14:38 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:15:43.171 * Looking for test storage... 
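The get_nvme_bdfs trace above reduces to a single pipeline: gen_nvme.sh emits a JSON bdev config and jq pulls out each controller's PCI address. A minimal equivalent, with $rootdir standing in for the workspace checkout:

    # Sketch of get_nvme_bdfs as traced above: enumerate NVMe controllers
    # by their PCI bus/device/function addresses.
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    ((${#bdfs[@]} > 0)) || { echo "No NVMe controllers found" >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"    # on this node: 0000:65:00.0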
00:15:43.171 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:15:43.171 22:14:38 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:43.171 22:14:38 env -- common/autotest_common.sh@1681 -- # lcov --version 00:15:43.171 22:14:38 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:43.432 22:14:38 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:43.432 22:14:38 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:43.432 22:14:38 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:43.432 22:14:38 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:43.432 22:14:38 env -- scripts/common.sh@336 -- # IFS=.-: 00:15:43.432 22:14:38 env -- scripts/common.sh@336 -- # read -ra ver1 00:15:43.432 22:14:38 env -- scripts/common.sh@337 -- # IFS=.-: 00:15:43.432 22:14:38 env -- scripts/common.sh@337 -- # read -ra ver2 00:15:43.432 22:14:38 env -- scripts/common.sh@338 -- # local 'op=<' 00:15:43.432 22:14:38 env -- scripts/common.sh@340 -- # ver1_l=2 00:15:43.432 22:14:38 env -- scripts/common.sh@341 -- # ver2_l=1 00:15:43.432 22:14:38 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:43.432 22:14:38 env -- scripts/common.sh@344 -- # case "$op" in 00:15:43.432 22:14:38 env -- scripts/common.sh@345 -- # : 1 00:15:43.432 22:14:38 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:43.432 22:14:38 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:43.432 22:14:38 env -- scripts/common.sh@365 -- # decimal 1 00:15:43.432 22:14:38 env -- scripts/common.sh@353 -- # local d=1 00:15:43.432 22:14:38 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:43.432 22:14:38 env -- scripts/common.sh@355 -- # echo 1 00:15:43.432 22:14:38 env -- scripts/common.sh@365 -- # ver1[v]=1 00:15:43.432 22:14:38 env -- scripts/common.sh@366 -- # decimal 2 00:15:43.432 22:14:38 env -- scripts/common.sh@353 -- # local d=2 00:15:43.432 22:14:38 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:43.433 22:14:38 env -- scripts/common.sh@355 -- # echo 2 00:15:43.433 22:14:38 env -- scripts/common.sh@366 -- # ver2[v]=2 00:15:43.433 22:14:38 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:43.433 22:14:38 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:43.433 22:14:38 env -- scripts/common.sh@368 -- # return 0 00:15:43.433 22:14:38 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:43.433 22:14:38 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:43.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.433 --rc genhtml_branch_coverage=1 00:15:43.433 --rc genhtml_function_coverage=1 00:15:43.433 --rc genhtml_legend=1 00:15:43.433 --rc geninfo_all_blocks=1 00:15:43.433 --rc geninfo_unexecuted_blocks=1 00:15:43.433 00:15:43.433 ' 00:15:43.433 22:14:38 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:43.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.433 --rc genhtml_branch_coverage=1 00:15:43.433 --rc genhtml_function_coverage=1 00:15:43.433 --rc genhtml_legend=1 00:15:43.433 --rc geninfo_all_blocks=1 00:15:43.433 --rc geninfo_unexecuted_blocks=1 00:15:43.433 00:15:43.433 ' 00:15:43.433 22:14:38 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:43.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.433 --rc genhtml_branch_coverage=1 00:15:43.433 --rc genhtml_function_coverage=1 
00:15:43.433 --rc genhtml_legend=1 00:15:43.433 --rc geninfo_all_blocks=1 00:15:43.433 --rc geninfo_unexecuted_blocks=1 00:15:43.433 00:15:43.433 ' 00:15:43.433 22:14:38 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:43.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.433 --rc genhtml_branch_coverage=1 00:15:43.433 --rc genhtml_function_coverage=1 00:15:43.433 --rc genhtml_legend=1 00:15:43.433 --rc geninfo_all_blocks=1 00:15:43.433 --rc geninfo_unexecuted_blocks=1 00:15:43.433 00:15:43.433 ' 00:15:43.433 22:14:38 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:15:43.433 22:14:38 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:43.433 22:14:38 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:43.433 22:14:38 env -- common/autotest_common.sh@10 -- # set +x 00:15:43.433 ************************************ 00:15:43.433 START TEST env_memory 00:15:43.433 ************************************ 00:15:43.433 22:14:38 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:15:43.433 00:15:43.433 00:15:43.433 CUnit - A unit testing framework for C - Version 2.1-3 00:15:43.433 http://cunit.sourceforge.net/ 00:15:43.433 00:15:43.433 00:15:43.433 Suite: memory 00:15:43.433 Test: alloc and free memory map ...[2024-10-01 22:14:38.532783] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:15:43.433 passed 00:15:43.433 Test: mem map translation ...[2024-10-01 22:14:38.550651] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:15:43.433 [2024-10-01 22:14:38.550678] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:15:43.433 [2024-10-01 22:14:38.550713] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:15:43.433 [2024-10-01 22:14:38.550720] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:15:43.433 passed 00:15:43.433 Test: mem map registration ...[2024-10-01 22:14:38.588804] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:15:43.433 [2024-10-01 22:14:38.588825] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:15:43.433 passed 00:15:43.433 Test: mem map adjacent registrations ...passed 00:15:43.433 00:15:43.433 Run Summary: Type Total Ran Passed Failed Inactive 00:15:43.433 suites 1 1 n/a 0 0 00:15:43.433 tests 4 4 4 0 0 00:15:43.433 asserts 152 152 152 0 n/a 00:15:43.433 00:15:43.433 Elapsed time = 0.126 seconds 00:15:43.433 00:15:43.433 real 0m0.133s 00:15:43.433 user 0m0.124s 00:15:43.433 sys 0m0.008s 00:15:43.433 22:14:38 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:43.433 22:14:38 env.env_memory -- common/autotest_common.sh@10 -- # set +x 
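The lt 1.15 2 walk traced above (scripts/common.sh) compares the installed lcov version field by field to decide whether the --rc coverage options are needed. A condensed, self-contained sketch of that logic (not the verbatim helper, which also splits on colons):

    # Split on dots/dashes, compare numerically field by field; a missing
    # field counts as 0, so 1.15 < 2 holds and the --rc options are enabled.
    lt() {
        local -a ver1 ver2
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < max; v++)); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov < 2: adding --rc branch/function options"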
00:15:43.433 ************************************ 00:15:43.433 END TEST env_memory 00:15:43.433 ************************************ 00:15:43.433 22:14:38 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:15:43.433 22:14:38 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:43.433 22:14:38 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:43.433 22:14:38 env -- common/autotest_common.sh@10 -- # set +x 00:15:43.694 ************************************ 00:15:43.694 START TEST env_vtophys 00:15:43.694 ************************************ 00:15:43.694 22:14:38 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:15:43.694 EAL: lib.eal log level changed from notice to debug 00:15:43.694 EAL: Detected lcore 0 as core 0 on socket 0 00:15:43.694 EAL: Detected lcore 1 as core 1 on socket 0 00:15:43.694 EAL: Detected lcore 2 as core 2 on socket 0 00:15:43.694 EAL: Detected lcore 3 as core 3 on socket 0 00:15:43.694 EAL: Detected lcore 4 as core 4 on socket 0 00:15:43.694 EAL: Detected lcore 5 as core 5 on socket 0 00:15:43.694 EAL: Detected lcore 6 as core 6 on socket 0 00:15:43.694 EAL: Detected lcore 7 as core 7 on socket 0 00:15:43.694 EAL: Detected lcore 8 as core 8 on socket 0 00:15:43.695 EAL: Detected lcore 9 as core 9 on socket 0 00:15:43.695 EAL: Detected lcore 10 as core 10 on socket 0 00:15:43.695 EAL: Detected lcore 11 as core 11 on socket 0 00:15:43.695 EAL: Detected lcore 12 as core 12 on socket 0 00:15:43.695 EAL: Detected lcore 13 as core 13 on socket 0 00:15:43.695 EAL: Detected lcore 14 as core 14 on socket 0 00:15:43.695 EAL: Detected lcore 15 as core 15 on socket 0 00:15:43.695 EAL: Detected lcore 16 as core 16 on socket 0 00:15:43.695 EAL: Detected lcore 17 as core 17 on socket 0 00:15:43.695 EAL: Detected lcore 18 as core 18 on socket 0 00:15:43.695 EAL: Detected lcore 19 as core 19 on socket 0 00:15:43.695 EAL: Detected lcore 20 as core 20 on socket 0 00:15:43.695 EAL: Detected lcore 21 as core 21 on socket 0 00:15:43.695 EAL: Detected lcore 22 as core 22 on socket 0 00:15:43.695 EAL: Detected lcore 23 as core 23 on socket 0 00:15:43.695 EAL: Detected lcore 24 as core 24 on socket 0 00:15:43.695 EAL: Detected lcore 25 as core 25 on socket 0 00:15:43.695 EAL: Detected lcore 26 as core 26 on socket 0 00:15:43.695 EAL: Detected lcore 27 as core 27 on socket 0 00:15:43.695 EAL: Detected lcore 28 as core 28 on socket 0 00:15:43.695 EAL: Detected lcore 29 as core 29 on socket 0 00:15:43.695 EAL: Detected lcore 30 as core 30 on socket 0 00:15:43.695 EAL: Detected lcore 31 as core 31 on socket 0 00:15:43.695 EAL: Detected lcore 32 as core 32 on socket 0 00:15:43.695 EAL: Detected lcore 33 as core 33 on socket 0 00:15:43.695 EAL: Detected lcore 34 as core 34 on socket 0 00:15:43.695 EAL: Detected lcore 35 as core 35 on socket 0 00:15:43.695 EAL: Detected lcore 36 as core 0 on socket 1 00:15:43.695 EAL: Detected lcore 37 as core 1 on socket 1 00:15:43.695 EAL: Detected lcore 38 as core 2 on socket 1 00:15:43.695 EAL: Detected lcore 39 as core 3 on socket 1 00:15:43.695 EAL: Detected lcore 40 as core 4 on socket 1 00:15:43.695 EAL: Detected lcore 41 as core 5 on socket 1 00:15:43.695 EAL: Detected lcore 42 as core 6 on socket 1 00:15:43.695 EAL: Detected lcore 43 as core 7 on socket 1 00:15:43.695 EAL: Detected lcore 44 as core 8 on socket 1 00:15:43.695 EAL: Detected lcore 45 as core 9 on socket 1 
00:15:43.695 EAL: Detected lcore 46 as core 10 on socket 1 00:15:43.695 EAL: Detected lcore 47 as core 11 on socket 1 00:15:43.695 EAL: Detected lcore 48 as core 12 on socket 1 00:15:43.695 EAL: Detected lcore 49 as core 13 on socket 1 00:15:43.695 EAL: Detected lcore 50 as core 14 on socket 1 00:15:43.695 EAL: Detected lcore 51 as core 15 on socket 1 00:15:43.695 EAL: Detected lcore 52 as core 16 on socket 1 00:15:43.695 EAL: Detected lcore 53 as core 17 on socket 1 00:15:43.695 EAL: Detected lcore 54 as core 18 on socket 1 00:15:43.695 EAL: Detected lcore 55 as core 19 on socket 1 00:15:43.695 EAL: Detected lcore 56 as core 20 on socket 1 00:15:43.695 EAL: Detected lcore 57 as core 21 on socket 1 00:15:43.695 EAL: Detected lcore 58 as core 22 on socket 1 00:15:43.695 EAL: Detected lcore 59 as core 23 on socket 1 00:15:43.695 EAL: Detected lcore 60 as core 24 on socket 1 00:15:43.695 EAL: Detected lcore 61 as core 25 on socket 1 00:15:43.695 EAL: Detected lcore 62 as core 26 on socket 1 00:15:43.695 EAL: Detected lcore 63 as core 27 on socket 1 00:15:43.695 EAL: Detected lcore 64 as core 28 on socket 1 00:15:43.695 EAL: Detected lcore 65 as core 29 on socket 1 00:15:43.695 EAL: Detected lcore 66 as core 30 on socket 1 00:15:43.695 EAL: Detected lcore 67 as core 31 on socket 1 00:15:43.695 EAL: Detected lcore 68 as core 32 on socket 1 00:15:43.695 EAL: Detected lcore 69 as core 33 on socket 1 00:15:43.695 EAL: Detected lcore 70 as core 34 on socket 1 00:15:43.695 EAL: Detected lcore 71 as core 35 on socket 1 00:15:43.695 EAL: Detected lcore 72 as core 0 on socket 0 00:15:43.695 EAL: Detected lcore 73 as core 1 on socket 0 00:15:43.695 EAL: Detected lcore 74 as core 2 on socket 0 00:15:43.695 EAL: Detected lcore 75 as core 3 on socket 0 00:15:43.695 EAL: Detected lcore 76 as core 4 on socket 0 00:15:43.695 EAL: Detected lcore 77 as core 5 on socket 0 00:15:43.695 EAL: Detected lcore 78 as core 6 on socket 0 00:15:43.695 EAL: Detected lcore 79 as core 7 on socket 0 00:15:43.695 EAL: Detected lcore 80 as core 8 on socket 0 00:15:43.695 EAL: Detected lcore 81 as core 9 on socket 0 00:15:43.695 EAL: Detected lcore 82 as core 10 on socket 0 00:15:43.695 EAL: Detected lcore 83 as core 11 on socket 0 00:15:43.695 EAL: Detected lcore 84 as core 12 on socket 0 00:15:43.695 EAL: Detected lcore 85 as core 13 on socket 0 00:15:43.695 EAL: Detected lcore 86 as core 14 on socket 0 00:15:43.695 EAL: Detected lcore 87 as core 15 on socket 0 00:15:43.695 EAL: Detected lcore 88 as core 16 on socket 0 00:15:43.695 EAL: Detected lcore 89 as core 17 on socket 0 00:15:43.695 EAL: Detected lcore 90 as core 18 on socket 0 00:15:43.695 EAL: Detected lcore 91 as core 19 on socket 0 00:15:43.695 EAL: Detected lcore 92 as core 20 on socket 0 00:15:43.695 EAL: Detected lcore 93 as core 21 on socket 0 00:15:43.695 EAL: Detected lcore 94 as core 22 on socket 0 00:15:43.695 EAL: Detected lcore 95 as core 23 on socket 0 00:15:43.695 EAL: Detected lcore 96 as core 24 on socket 0 00:15:43.695 EAL: Detected lcore 97 as core 25 on socket 0 00:15:43.695 EAL: Detected lcore 98 as core 26 on socket 0 00:15:43.695 EAL: Detected lcore 99 as core 27 on socket 0 00:15:43.695 EAL: Detected lcore 100 as core 28 on socket 0 00:15:43.695 EAL: Detected lcore 101 as core 29 on socket 0 00:15:43.695 EAL: Detected lcore 102 as core 30 on socket 0 00:15:43.695 EAL: Detected lcore 103 as core 31 on socket 0 00:15:43.695 EAL: Detected lcore 104 as core 32 on socket 0 00:15:43.695 EAL: Detected lcore 105 as core 33 on socket 0 00:15:43.695 EAL: 
Detected lcore 106 as core 34 on socket 0 00:15:43.695 EAL: Detected lcore 107 as core 35 on socket 0 00:15:43.695 EAL: Detected lcore 108 as core 0 on socket 1 00:15:43.695 EAL: Detected lcore 109 as core 1 on socket 1 00:15:43.695 EAL: Detected lcore 110 as core 2 on socket 1 00:15:43.695 EAL: Detected lcore 111 as core 3 on socket 1 00:15:43.695 EAL: Detected lcore 112 as core 4 on socket 1 00:15:43.695 EAL: Detected lcore 113 as core 5 on socket 1 00:15:43.695 EAL: Detected lcore 114 as core 6 on socket 1 00:15:43.695 EAL: Detected lcore 115 as core 7 on socket 1 00:15:43.695 EAL: Detected lcore 116 as core 8 on socket 1 00:15:43.695 EAL: Detected lcore 117 as core 9 on socket 1 00:15:43.695 EAL: Detected lcore 118 as core 10 on socket 1 00:15:43.695 EAL: Detected lcore 119 as core 11 on socket 1 00:15:43.695 EAL: Detected lcore 120 as core 12 on socket 1 00:15:43.695 EAL: Detected lcore 121 as core 13 on socket 1 00:15:43.695 EAL: Detected lcore 122 as core 14 on socket 1 00:15:43.695 EAL: Detected lcore 123 as core 15 on socket 1 00:15:43.695 EAL: Detected lcore 124 as core 16 on socket 1 00:15:43.695 EAL: Detected lcore 125 as core 17 on socket 1 00:15:43.695 EAL: Detected lcore 126 as core 18 on socket 1 00:15:43.695 EAL: Detected lcore 127 as core 19 on socket 1 00:15:43.695 EAL: Skipped lcore 128 as core 20 on socket 1 00:15:43.695 EAL: Skipped lcore 129 as core 21 on socket 1 00:15:43.695 EAL: Skipped lcore 130 as core 22 on socket 1 00:15:43.695 EAL: Skipped lcore 131 as core 23 on socket 1 00:15:43.695 EAL: Skipped lcore 132 as core 24 on socket 1 00:15:43.695 EAL: Skipped lcore 133 as core 25 on socket 1 00:15:43.695 EAL: Skipped lcore 134 as core 26 on socket 1 00:15:43.695 EAL: Skipped lcore 135 as core 27 on socket 1 00:15:43.695 EAL: Skipped lcore 136 as core 28 on socket 1 00:15:43.695 EAL: Skipped lcore 137 as core 29 on socket 1 00:15:43.695 EAL: Skipped lcore 138 as core 30 on socket 1 00:15:43.695 EAL: Skipped lcore 139 as core 31 on socket 1 00:15:43.695 EAL: Skipped lcore 140 as core 32 on socket 1 00:15:43.695 EAL: Skipped lcore 141 as core 33 on socket 1 00:15:43.695 EAL: Skipped lcore 142 as core 34 on socket 1 00:15:43.695 EAL: Skipped lcore 143 as core 35 on socket 1 00:15:43.695 EAL: Maximum logical cores by configuration: 128 00:15:43.695 EAL: Detected CPU lcores: 128 00:15:43.695 EAL: Detected NUMA nodes: 2 00:15:43.695 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:15:43.695 EAL: Detected shared linkage of DPDK 00:15:43.695 EAL: No shared files mode enabled, IPC will be disabled 00:15:43.695 EAL: Bus pci wants IOVA as 'DC' 00:15:43.695 EAL: Buses did not request a specific IOVA mode. 00:15:43.695 EAL: IOMMU is available, selecting IOVA as VA mode. 00:15:43.695 EAL: Selected IOVA mode 'VA' 00:15:43.695 EAL: Probing VFIO support... 00:15:43.695 EAL: IOMMU type 1 (Type 1) is supported 00:15:43.695 EAL: IOMMU type 7 (sPAPR) is not supported 00:15:43.695 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:15:43.695 EAL: VFIO support initialized 00:15:43.695 EAL: Ask a virtual area of 0x2e000 bytes 00:15:43.695 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:15:43.695 EAL: Setting up physically contiguous memory... 
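EAL's "Probing VFIO support..." above lands on IOMMU type 1 and IOVA-as-VA. The same precondition can be checked from the shell before a run; a sketch using standard sysfs paths (not taken from this log):

    # Populated IOMMU groups mean vfio-pci can run in full IOMMU mode and
    # DPDK may select IOVA as VA, as seen above.
    if [ -d /sys/kernel/iommu_groups ] && [ -n "$(ls -A /sys/kernel/iommu_groups 2>/dev/null)" ]; then
        echo "IOMMU active: $(ls /sys/kernel/iommu_groups | wc -l) groups"
    else
        echo "No IOMMU groups: VFIO would need enable_unsafe_noiommu_mode"
    fi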
00:15:43.695 EAL: Setting maximum number of open files to 524288 00:15:43.695 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:15:43.695 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:15:43.695 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:15:43.695 EAL: Ask a virtual area of 0x61000 bytes 00:15:43.695 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:15:43.695 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:15:43.695 EAL: Ask a virtual area of 0x400000000 bytes 00:15:43.695 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:15:43.695 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:15:43.695 EAL: Ask a virtual area of 0x61000 bytes 00:15:43.695 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:15:43.695 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:15:43.695 EAL: Ask a virtual area of 0x400000000 bytes 00:15:43.695 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:15:43.695 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:15:43.695 EAL: Ask a virtual area of 0x61000 bytes 00:15:43.695 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:15:43.695 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:15:43.695 EAL: Ask a virtual area of 0x400000000 bytes 00:15:43.695 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:15:43.695 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:15:43.695 EAL: Ask a virtual area of 0x61000 bytes 00:15:43.695 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:15:43.695 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:15:43.695 EAL: Ask a virtual area of 0x400000000 bytes 00:15:43.695 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:15:43.696 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:15:43.696 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:15:43.696 EAL: Ask a virtual area of 0x61000 bytes 00:15:43.696 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:15:43.696 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:15:43.696 EAL: Ask a virtual area of 0x400000000 bytes 00:15:43.696 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:15:43.696 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:15:43.696 EAL: Ask a virtual area of 0x61000 bytes 00:15:43.696 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:15:43.696 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:15:43.696 EAL: Ask a virtual area of 0x400000000 bytes 00:15:43.696 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:15:43.696 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:15:43.696 EAL: Ask a virtual area of 0x61000 bytes 00:15:43.696 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:15:43.696 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:15:43.696 EAL: Ask a virtual area of 0x400000000 bytes 00:15:43.696 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:15:43.696 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:15:43.696 EAL: Ask a virtual area of 0x61000 bytes 00:15:43.696 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:15:43.696 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:15:43.696 EAL: Ask a virtual area of 0x400000000 bytes 00:15:43.696 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:15:43.696 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:15:43.696 EAL: Hugepages will be freed exactly as allocated. 00:15:43.696 EAL: No shared files mode enabled, IPC is disabled 00:15:43.696 EAL: No shared files mode enabled, IPC is disabled 00:15:43.696 EAL: TSC frequency is ~2400000 KHz 00:15:43.696 EAL: Main lcore 0 is ready (tid=7f78dd815a00;cpuset=[0]) 00:15:43.696 EAL: Trying to obtain current memory policy. 00:15:43.696 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:43.696 EAL: Restoring previous memory policy: 0 00:15:43.696 EAL: request: mp_malloc_sync 00:15:43.696 EAL: No shared files mode enabled, IPC is disabled 00:15:43.696 EAL: Heap on socket 0 was expanded by 2MB 00:15:43.696 EAL: No shared files mode enabled, IPC is disabled 00:15:43.696 EAL: No PCI address specified using 'addr=' in: bus=pci 00:15:43.696 EAL: Mem event callback 'spdk:(nil)' registered 00:15:43.696 00:15:43.696 00:15:43.696 CUnit - A unit testing framework for C - Version 2.1-3 00:15:43.696 http://cunit.sourceforge.net/ 00:15:43.696 00:15:43.696 00:15:43.696 Suite: components_suite 00:15:43.696 Test: vtophys_malloc_test ...passed 00:15:43.696 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:15:43.696 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:43.696 EAL: Restoring previous memory policy: 4 00:15:43.696 EAL: Calling mem event callback 'spdk:(nil)' 00:15:43.696 EAL: request: mp_malloc_sync 00:15:43.696 EAL: No shared files mode enabled, IPC is disabled 00:15:43.696 EAL: Heap on socket 0 was expanded by 4MB 00:15:43.696 EAL: Calling mem event callback 'spdk:(nil)' 00:15:43.696 EAL: request: mp_malloc_sync 00:15:43.696 EAL: No shared files mode enabled, IPC is disabled 00:15:43.696 EAL: Heap on socket 0 was shrunk by 4MB 00:15:43.696 EAL: Trying to obtain current memory policy. 00:15:43.696 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:43.696 EAL: Restoring previous memory policy: 4 00:15:43.696 EAL: Calling mem event callback 'spdk:(nil)' 00:15:43.696 EAL: request: mp_malloc_sync 00:15:43.696 EAL: No shared files mode enabled, IPC is disabled 00:15:43.696 EAL: Heap on socket 0 was expanded by 6MB 00:15:43.696 EAL: Calling mem event callback 'spdk:(nil)' 00:15:43.696 EAL: request: mp_malloc_sync 00:15:43.696 EAL: No shared files mode enabled, IPC is disabled 00:15:43.696 EAL: Heap on socket 0 was shrunk by 6MB 00:15:43.696 EAL: Trying to obtain current memory policy. 00:15:43.696 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:43.696 EAL: Restoring previous memory policy: 4 00:15:43.696 EAL: Calling mem event callback 'spdk:(nil)' 00:15:43.696 EAL: request: mp_malloc_sync 00:15:43.696 EAL: No shared files mode enabled, IPC is disabled 00:15:43.696 EAL: Heap on socket 0 was expanded by 10MB 00:15:43.696 EAL: Calling mem event callback 'spdk:(nil)' 00:15:43.696 EAL: request: mp_malloc_sync 00:15:43.696 EAL: No shared files mode enabled, IPC is disabled 00:15:43.696 EAL: Heap on socket 0 was shrunk by 10MB 00:15:43.696 EAL: Trying to obtain current memory policy. 
00:15:43.696 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:43.696 EAL: Restoring previous memory policy: 4 00:15:43.696 EAL: Calling mem event callback 'spdk:(nil)' 00:15:43.696 EAL: request: mp_malloc_sync 00:15:43.696 EAL: No shared files mode enabled, IPC is disabled 00:15:43.696 EAL: Heap on socket 0 was expanded by 18MB 00:15:43.696 EAL: Calling mem event callback 'spdk:(nil)' 00:15:43.696 EAL: request: mp_malloc_sync 00:15:43.696 EAL: No shared files mode enabled, IPC is disabled 00:15:43.696 EAL: Heap on socket 0 was shrunk by 18MB 00:15:43.696 EAL: Trying to obtain current memory policy. 00:15:43.696 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:43.696 EAL: Restoring previous memory policy: 4 00:15:43.696 EAL: Calling mem event callback 'spdk:(nil)' 00:15:43.696 EAL: request: mp_malloc_sync 00:15:43.696 EAL: No shared files mode enabled, IPC is disabled 00:15:43.696 EAL: Heap on socket 0 was expanded by 34MB 00:15:43.696 EAL: Calling mem event callback 'spdk:(nil)' 00:15:43.696 EAL: request: mp_malloc_sync 00:15:43.696 EAL: No shared files mode enabled, IPC is disabled 00:15:43.696 EAL: Heap on socket 0 was shrunk by 34MB 00:15:43.696 EAL: Trying to obtain current memory policy. 00:15:43.696 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:43.696 EAL: Restoring previous memory policy: 4 00:15:43.696 EAL: Calling mem event callback 'spdk:(nil)' 00:15:43.696 EAL: request: mp_malloc_sync 00:15:43.696 EAL: No shared files mode enabled, IPC is disabled 00:15:43.696 EAL: Heap on socket 0 was expanded by 66MB 00:15:43.696 EAL: Calling mem event callback 'spdk:(nil)' 00:15:43.696 EAL: request: mp_malloc_sync 00:15:43.696 EAL: No shared files mode enabled, IPC is disabled 00:15:43.696 EAL: Heap on socket 0 was shrunk by 66MB 00:15:43.696 EAL: Trying to obtain current memory policy. 00:15:43.696 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:43.696 EAL: Restoring previous memory policy: 4 00:15:43.696 EAL: Calling mem event callback 'spdk:(nil)' 00:15:43.696 EAL: request: mp_malloc_sync 00:15:43.696 EAL: No shared files mode enabled, IPC is disabled 00:15:43.696 EAL: Heap on socket 0 was expanded by 130MB 00:15:43.696 EAL: Calling mem event callback 'spdk:(nil)' 00:15:43.696 EAL: request: mp_malloc_sync 00:15:43.696 EAL: No shared files mode enabled, IPC is disabled 00:15:43.696 EAL: Heap on socket 0 was shrunk by 130MB 00:15:43.696 EAL: Trying to obtain current memory policy. 00:15:43.696 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:43.696 EAL: Restoring previous memory policy: 4 00:15:43.696 EAL: Calling mem event callback 'spdk:(nil)' 00:15:43.696 EAL: request: mp_malloc_sync 00:15:43.696 EAL: No shared files mode enabled, IPC is disabled 00:15:43.696 EAL: Heap on socket 0 was expanded by 258MB 00:15:43.696 EAL: Calling mem event callback 'spdk:(nil)' 00:15:43.956 EAL: request: mp_malloc_sync 00:15:43.956 EAL: No shared files mode enabled, IPC is disabled 00:15:43.956 EAL: Heap on socket 0 was shrunk by 258MB 00:15:43.956 EAL: Trying to obtain current memory policy. 
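Each "Heap on socket 0 was expanded by N MB" step in this suite draws 2048 kB hugepages from the per-node pools listed earlier by setup.sh status, and each "shrunk" step returns them. A quick way to watch the pools from another shell while the test runs (sketch):

    # Free 2 MiB hugepages per NUMA node; counts drop while the heap is
    # expanded and recover when it is shrunk.
    grep -H '' /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages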
00:15:43.956 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:43.956 EAL: Restoring previous memory policy: 4 00:15:43.956 EAL: Calling mem event callback 'spdk:(nil)' 00:15:43.956 EAL: request: mp_malloc_sync 00:15:43.956 EAL: No shared files mode enabled, IPC is disabled 00:15:43.956 EAL: Heap on socket 0 was expanded by 514MB 00:15:43.956 EAL: Calling mem event callback 'spdk:(nil)' 00:15:43.956 EAL: request: mp_malloc_sync 00:15:43.956 EAL: No shared files mode enabled, IPC is disabled 00:15:43.956 EAL: Heap on socket 0 was shrunk by 514MB 00:15:43.956 EAL: Trying to obtain current memory policy. 00:15:43.956 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:44.215 EAL: Restoring previous memory policy: 4 00:15:44.215 EAL: Calling mem event callback 'spdk:(nil)' 00:15:44.215 EAL: request: mp_malloc_sync 00:15:44.215 EAL: No shared files mode enabled, IPC is disabled 00:15:44.215 EAL: Heap on socket 0 was expanded by 1026MB 00:15:44.215 EAL: Calling mem event callback 'spdk:(nil)' 00:15:44.476 EAL: request: mp_malloc_sync 00:15:44.476 EAL: No shared files mode enabled, IPC is disabled 00:15:44.476 EAL: Heap on socket 0 was shrunk by 1026MB 00:15:44.476 passed 00:15:44.476 00:15:44.476 Run Summary: Type Total Ran Passed Failed Inactive 00:15:44.476 suites 1 1 n/a 0 0 00:15:44.476 tests 2 2 2 0 0 00:15:44.476 asserts 497 497 497 0 n/a 00:15:44.476 00:15:44.476 Elapsed time = 0.650 seconds 00:15:44.476 EAL: Calling mem event callback 'spdk:(nil)' 00:15:44.476 EAL: request: mp_malloc_sync 00:15:44.476 EAL: No shared files mode enabled, IPC is disabled 00:15:44.476 EAL: Heap on socket 0 was shrunk by 2MB 00:15:44.476 EAL: No shared files mode enabled, IPC is disabled 00:15:44.476 EAL: No shared files mode enabled, IPC is disabled 00:15:44.476 EAL: No shared files mode enabled, IPC is disabled 00:15:44.476 00:15:44.477 real 0m0.757s 00:15:44.477 user 0m0.406s 00:15:44.477 sys 0m0.329s 00:15:44.477 22:14:39 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:44.477 22:14:39 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:15:44.477 ************************************ 00:15:44.477 END TEST env_vtophys 00:15:44.477 ************************************ 00:15:44.477 22:14:39 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:15:44.477 22:14:39 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:44.477 22:14:39 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:44.477 22:14:39 env -- common/autotest_common.sh@10 -- # set +x 00:15:44.477 ************************************ 00:15:44.477 START TEST env_pci 00:15:44.477 ************************************ 00:15:44.477 22:14:39 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:15:44.477 00:15:44.477 00:15:44.477 CUnit - A unit testing framework for C - Version 2.1-3 00:15:44.477 http://cunit.sourceforge.net/ 00:15:44.477 00:15:44.477 00:15:44.477 Suite: pci 00:15:44.477 Test: pci_hook ...[2024-10-01 22:14:39.578106] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 4158620 has claimed it 00:15:44.477 EAL: Cannot find device (10000:00:01.0) 00:15:44.477 EAL: Failed to attach device on primary process 00:15:44.477 passed 00:15:44.477 00:15:44.477 Run Summary: Type Total Ran Passed Failed Inactive 
00:15:44.477 suites 1 1 n/a 0 0 00:15:44.477 tests 1 1 1 0 0 00:15:44.477 asserts 25 25 25 0 n/a 00:15:44.477 00:15:44.477 Elapsed time = 0.031 seconds 00:15:44.477 00:15:44.477 real 0m0.051s 00:15:44.477 user 0m0.017s 00:15:44.477 sys 0m0.034s 00:15:44.477 22:14:39 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:44.477 22:14:39 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:15:44.477 ************************************ 00:15:44.477 END TEST env_pci 00:15:44.477 ************************************ 00:15:44.477 22:14:39 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:15:44.477 22:14:39 env -- env/env.sh@15 -- # uname 00:15:44.477 22:14:39 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:15:44.477 22:14:39 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:15:44.477 22:14:39 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:15:44.477 22:14:39 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:44.477 22:14:39 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:44.477 22:14:39 env -- common/autotest_common.sh@10 -- # set +x 00:15:44.477 ************************************ 00:15:44.477 START TEST env_dpdk_post_init 00:15:44.477 ************************************ 00:15:44.477 22:14:39 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:15:44.477 EAL: Detected CPU lcores: 128 00:15:44.477 EAL: Detected NUMA nodes: 2 00:15:44.477 EAL: Detected shared linkage of DPDK 00:15:44.737 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:15:44.737 EAL: Selected IOVA mode 'VA' 00:15:44.737 EAL: VFIO support initialized 00:15:44.737 TELEMETRY: No legacy callbacks, legacy socket not created 00:15:44.737 EAL: Using IOMMU type 1 (Type 1) 00:15:44.737 EAL: Ignore mapping IO port bar(1) 00:15:44.998 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:15:44.998 EAL: Ignore mapping IO port bar(1) 00:15:45.259 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:15:45.259 EAL: Ignore mapping IO port bar(1) 00:15:45.259 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:15:45.518 EAL: Ignore mapping IO port bar(1) 00:15:45.518 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:15:45.779 EAL: Ignore mapping IO port bar(1) 00:15:45.779 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:15:46.039 EAL: Ignore mapping IO port bar(1) 00:15:46.039 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:15:46.039 EAL: Ignore mapping IO port bar(1) 00:15:46.300 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:15:46.300 EAL: Ignore mapping IO port bar(1) 00:15:46.560 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:15:46.821 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:15:46.821 EAL: Ignore mapping IO port bar(1) 00:15:46.821 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:15:47.083 EAL: Ignore mapping IO port bar(1) 00:15:47.083 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:15:47.344 EAL: Ignore mapping IO port bar(1) 00:15:47.344 
EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:15:47.604 EAL: Ignore mapping IO port bar(1) 00:15:47.604 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:15:47.604 EAL: Ignore mapping IO port bar(1) 00:15:47.865 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:15:47.865 EAL: Ignore mapping IO port bar(1) 00:15:48.125 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:15:48.125 EAL: Ignore mapping IO port bar(1) 00:15:48.386 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:15:48.386 EAL: Ignore mapping IO port bar(1) 00:15:48.386 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:15:48.386 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:15:48.386 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:15:48.645 Starting DPDK initialization... 00:15:48.645 Starting SPDK post initialization... 00:15:48.645 SPDK NVMe probe 00:15:48.645 Attaching to 0000:65:00.0 00:15:48.645 Attached to 0000:65:00.0 00:15:48.645 Cleaning up... 00:15:50.558 00:15:50.558 real 0m5.718s 00:15:50.558 user 0m0.087s 00:15:50.558 sys 0m0.178s 00:15:50.558 22:14:45 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:50.558 22:14:45 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:15:50.558 ************************************ 00:15:50.558 END TEST env_dpdk_post_init 00:15:50.558 ************************************ 00:15:50.558 22:14:45 env -- env/env.sh@26 -- # uname 00:15:50.558 22:14:45 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:15:50.558 22:14:45 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:15:50.558 22:14:45 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:50.558 22:14:45 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:50.558 22:14:45 env -- common/autotest_common.sh@10 -- # set +x 00:15:50.558 ************************************ 00:15:50.558 START TEST env_mem_callbacks 00:15:50.558 ************************************ 00:15:50.558 22:14:45 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:15:50.558 EAL: Detected CPU lcores: 128 00:15:50.558 EAL: Detected NUMA nodes: 2 00:15:50.558 EAL: Detected shared linkage of DPDK 00:15:50.558 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:15:50.558 EAL: Selected IOVA mode 'VA' 00:15:50.558 EAL: VFIO support initialized 00:15:50.558 TELEMETRY: No legacy callbacks, legacy socket not created 00:15:50.558 00:15:50.558 00:15:50.558 CUnit - A unit testing framework for C - Version 2.1-3 00:15:50.558 http://cunit.sourceforge.net/ 00:15:50.558 00:15:50.558 00:15:50.558 Suite: memory 00:15:50.558 Test: test ... 
00:15:50.558 register 0x200000200000 2097152 00:15:50.558 malloc 3145728 00:15:50.558 register 0x200000400000 4194304 00:15:50.558 buf 0x200000500000 len 3145728 PASSED 00:15:50.558 malloc 64 00:15:50.558 buf 0x2000004fff40 len 64 PASSED 00:15:50.558 malloc 4194304 00:15:50.558 register 0x200000800000 6291456 00:15:50.558 buf 0x200000a00000 len 4194304 PASSED 00:15:50.558 free 0x200000500000 3145728 00:15:50.558 free 0x2000004fff40 64 00:15:50.558 unregister 0x200000400000 4194304 PASSED 00:15:50.558 free 0x200000a00000 4194304 00:15:50.558 unregister 0x200000800000 6291456 PASSED 00:15:50.558 malloc 8388608 00:15:50.558 register 0x200000400000 10485760 00:15:50.558 buf 0x200000600000 len 8388608 PASSED 00:15:50.558 free 0x200000600000 8388608 00:15:50.558 unregister 0x200000400000 10485760 PASSED 00:15:50.558 passed 00:15:50.558 00:15:50.558 Run Summary: Type Total Ran Passed Failed Inactive 00:15:50.558 suites 1 1 n/a 0 0 00:15:50.558 tests 1 1 1 0 0 00:15:50.558 asserts 15 15 15 0 n/a 00:15:50.558 00:15:50.558 Elapsed time = 0.008 seconds 00:15:50.558 00:15:50.558 real 0m0.067s 00:15:50.558 user 0m0.024s 00:15:50.558 sys 0m0.043s 00:15:50.558 22:14:45 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:50.558 22:14:45 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:15:50.558 ************************************ 00:15:50.558 END TEST env_mem_callbacks 00:15:50.558 ************************************ 00:15:50.558 00:15:50.558 real 0m7.338s 00:15:50.558 user 0m0.934s 00:15:50.558 sys 0m0.966s 00:15:50.558 22:14:45 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:50.558 22:14:45 env -- common/autotest_common.sh@10 -- # set +x 00:15:50.558 ************************************ 00:15:50.558 END TEST env 00:15:50.558 ************************************ 00:15:50.558 22:14:45 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:15:50.558 22:14:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:50.558 22:14:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:50.558 22:14:45 -- common/autotest_common.sh@10 -- # set +x 00:15:50.558 ************************************ 00:15:50.558 START TEST rpc 00:15:50.558 ************************************ 00:15:50.558 22:14:45 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:15:50.558 * Looking for test storage... 
00:15:50.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:15:50.558 22:14:45 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:50.558 22:14:45 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:15:50.558 22:14:45 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:50.819 22:14:45 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:50.819 22:14:45 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:50.819 22:14:45 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:50.819 22:14:45 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:50.819 22:14:45 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:15:50.819 22:14:45 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:15:50.819 22:14:45 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:15:50.819 22:14:45 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:15:50.819 22:14:45 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:15:50.819 22:14:45 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:15:50.819 22:14:45 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:15:50.819 22:14:45 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:50.819 22:14:45 rpc -- scripts/common.sh@344 -- # case "$op" in 00:15:50.819 22:14:45 rpc -- scripts/common.sh@345 -- # : 1 00:15:50.819 22:14:45 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:50.819 22:14:45 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:50.819 22:14:45 rpc -- scripts/common.sh@365 -- # decimal 1 00:15:50.819 22:14:45 rpc -- scripts/common.sh@353 -- # local d=1 00:15:50.819 22:14:45 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:50.819 22:14:45 rpc -- scripts/common.sh@355 -- # echo 1 00:15:50.819 22:14:45 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:15:50.819 22:14:45 rpc -- scripts/common.sh@366 -- # decimal 2 00:15:50.819 22:14:45 rpc -- scripts/common.sh@353 -- # local d=2 00:15:50.819 22:14:45 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:50.819 22:14:45 rpc -- scripts/common.sh@355 -- # echo 2 00:15:50.819 22:14:45 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:15:50.819 22:14:45 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:50.819 22:14:45 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:50.819 22:14:45 rpc -- scripts/common.sh@368 -- # return 0 00:15:50.819 22:14:45 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:50.819 22:14:45 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:50.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.819 --rc genhtml_branch_coverage=1 00:15:50.819 --rc genhtml_function_coverage=1 00:15:50.819 --rc genhtml_legend=1 00:15:50.819 --rc geninfo_all_blocks=1 00:15:50.819 --rc geninfo_unexecuted_blocks=1 00:15:50.819 00:15:50.819 ' 00:15:50.819 22:14:45 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:50.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.819 --rc genhtml_branch_coverage=1 00:15:50.819 --rc genhtml_function_coverage=1 00:15:50.819 --rc genhtml_legend=1 00:15:50.819 --rc geninfo_all_blocks=1 00:15:50.819 --rc geninfo_unexecuted_blocks=1 00:15:50.819 00:15:50.819 ' 00:15:50.819 22:14:45 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:50.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.819 --rc genhtml_branch_coverage=1 00:15:50.819 --rc genhtml_function_coverage=1 
00:15:50.819 --rc genhtml_legend=1 00:15:50.819 --rc geninfo_all_blocks=1 00:15:50.819 --rc geninfo_unexecuted_blocks=1 00:15:50.819 00:15:50.819 ' 00:15:50.819 22:14:45 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:50.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.819 --rc genhtml_branch_coverage=1 00:15:50.819 --rc genhtml_function_coverage=1 00:15:50.819 --rc genhtml_legend=1 00:15:50.819 --rc geninfo_all_blocks=1 00:15:50.819 --rc geninfo_unexecuted_blocks=1 00:15:50.819 00:15:50.819 ' 00:15:50.819 22:14:45 rpc -- rpc/rpc.sh@65 -- # spdk_pid=4160079 00:15:50.819 22:14:45 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:15:50.819 22:14:45 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:15:50.819 22:14:45 rpc -- rpc/rpc.sh@67 -- # waitforlisten 4160079 00:15:50.819 22:14:45 rpc -- common/autotest_common.sh@831 -- # '[' -z 4160079 ']' 00:15:50.819 22:14:45 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.819 22:14:45 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:50.819 22:14:45 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.819 22:14:45 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:50.819 22:14:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.819 [2024-10-01 22:14:45.949262] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:15:50.819 [2024-10-01 22:14:45.949314] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4160079 ] 00:15:50.819 [2024-10-01 22:14:46.010693] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.080 [2024-10-01 22:14:46.075359] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:15:51.080 [2024-10-01 22:14:46.075395] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 4160079' to capture a snapshot of events at runtime. 00:15:51.080 [2024-10-01 22:14:46.075403] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:51.080 [2024-10-01 22:14:46.075410] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:51.080 [2024-10-01 22:14:46.075416] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid4160079 for offline analysis/debug. 
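[Annotation] At this point spdk_tgt (pid 4160079) has been launched and the harness is waiting for its default UNIX-domain JSON-RPC socket /var/tmp/spdk.sock; once it is up, the rpc.sh cases below drive it through the rpc_cmd wrapper. A minimal sketch of the equivalent manual calls using the stock scripts/rpc.py client; the checkout path is taken from the log, and the command names and arguments are exactly the ones the test issues below:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Query the target over its default socket (/var/tmp/spdk.sock)
    $SPDK_DIR/scripts/rpc.py spdk_get_version

    # What rpc_integrity exercises: an 8 MB malloc bdev with 512-byte
    # blocks (hence num_blocks 16384 in the JSON dump), wrapped in a
    # passthru bdev, inspected, then torn down in reverse order
    $SPDK_DIR/scripts/rpc.py bdev_malloc_create 8 512
    $SPDK_DIR/scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    $SPDK_DIR/scripts/rpc.py bdev_get_bdevs
    $SPDK_DIR/scripts/rpc.py bdev_passthru_delete Passthru0
    $SPDK_DIR/scripts/rpc.py bdev_malloc_delete Malloc0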
00:15:51.080 [2024-10-01 22:14:46.075444] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.080 22:14:46 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:51.080 22:14:46 rpc -- common/autotest_common.sh@864 -- # return 0 00:15:51.080 22:14:46 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:15:51.080 22:14:46 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:15:51.080 22:14:46 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:15:51.080 22:14:46 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:15:51.080 22:14:46 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:51.080 22:14:46 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:51.080 22:14:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.341 ************************************ 00:15:51.341 START TEST rpc_integrity 00:15:51.341 ************************************ 00:15:51.341 22:14:46 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:15:51.341 22:14:46 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:51.341 22:14:46 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.341 22:14:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:51.341 22:14:46 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.341 22:14:46 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:15:51.341 22:14:46 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:15:51.341 22:14:46 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:15:51.341 22:14:46 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:15:51.341 22:14:46 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.341 22:14:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:51.341 22:14:46 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.341 22:14:46 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:15:51.341 22:14:46 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:15:51.341 22:14:46 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.341 22:14:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:51.341 22:14:46 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.341 22:14:46 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:15:51.341 { 00:15:51.341 "name": "Malloc0", 00:15:51.341 "aliases": [ 00:15:51.341 "d95a280f-4f93-4ab0-873d-6d3b425c6748" 00:15:51.341 ], 00:15:51.341 "product_name": "Malloc disk", 00:15:51.341 "block_size": 512, 00:15:51.341 "num_blocks": 16384, 00:15:51.341 "uuid": "d95a280f-4f93-4ab0-873d-6d3b425c6748", 00:15:51.341 "assigned_rate_limits": { 00:15:51.341 "rw_ios_per_sec": 0, 00:15:51.341 "rw_mbytes_per_sec": 0, 00:15:51.341 "r_mbytes_per_sec": 0, 00:15:51.341 "w_mbytes_per_sec": 0 00:15:51.341 }, 
00:15:51.341 "claimed": false, 00:15:51.341 "zoned": false, 00:15:51.341 "supported_io_types": { 00:15:51.341 "read": true, 00:15:51.341 "write": true, 00:15:51.341 "unmap": true, 00:15:51.341 "flush": true, 00:15:51.341 "reset": true, 00:15:51.341 "nvme_admin": false, 00:15:51.341 "nvme_io": false, 00:15:51.341 "nvme_io_md": false, 00:15:51.341 "write_zeroes": true, 00:15:51.341 "zcopy": true, 00:15:51.341 "get_zone_info": false, 00:15:51.341 "zone_management": false, 00:15:51.341 "zone_append": false, 00:15:51.341 "compare": false, 00:15:51.341 "compare_and_write": false, 00:15:51.341 "abort": true, 00:15:51.341 "seek_hole": false, 00:15:51.341 "seek_data": false, 00:15:51.341 "copy": true, 00:15:51.341 "nvme_iov_md": false 00:15:51.341 }, 00:15:51.341 "memory_domains": [ 00:15:51.341 { 00:15:51.341 "dma_device_id": "system", 00:15:51.341 "dma_device_type": 1 00:15:51.341 }, 00:15:51.341 { 00:15:51.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.341 "dma_device_type": 2 00:15:51.341 } 00:15:51.341 ], 00:15:51.341 "driver_specific": {} 00:15:51.341 } 00:15:51.341 ]' 00:15:51.341 22:14:46 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:15:51.341 22:14:46 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:15:51.341 22:14:46 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:15:51.341 22:14:46 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.341 22:14:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:51.341 [2024-10-01 22:14:46.490112] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:15:51.341 [2024-10-01 22:14:46.490144] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.341 [2024-10-01 22:14:46.490157] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x7793b0 00:15:51.341 [2024-10-01 22:14:46.490163] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.341 [2024-10-01 22:14:46.491524] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.341 [2024-10-01 22:14:46.491544] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:15:51.341 Passthru0 00:15:51.341 22:14:46 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.341 22:14:46 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:15:51.341 22:14:46 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.341 22:14:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:51.341 22:14:46 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.341 22:14:46 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:15:51.341 { 00:15:51.341 "name": "Malloc0", 00:15:51.341 "aliases": [ 00:15:51.341 "d95a280f-4f93-4ab0-873d-6d3b425c6748" 00:15:51.341 ], 00:15:51.341 "product_name": "Malloc disk", 00:15:51.341 "block_size": 512, 00:15:51.341 "num_blocks": 16384, 00:15:51.341 "uuid": "d95a280f-4f93-4ab0-873d-6d3b425c6748", 00:15:51.341 "assigned_rate_limits": { 00:15:51.341 "rw_ios_per_sec": 0, 00:15:51.341 "rw_mbytes_per_sec": 0, 00:15:51.341 "r_mbytes_per_sec": 0, 00:15:51.341 "w_mbytes_per_sec": 0 00:15:51.341 }, 00:15:51.341 "claimed": true, 00:15:51.341 "claim_type": "exclusive_write", 00:15:51.341 "zoned": false, 00:15:51.341 "supported_io_types": { 00:15:51.341 "read": true, 00:15:51.341 "write": true, 00:15:51.341 "unmap": true, 00:15:51.341 "flush": 
true, 00:15:51.341 "reset": true, 00:15:51.341 "nvme_admin": false, 00:15:51.341 "nvme_io": false, 00:15:51.341 "nvme_io_md": false, 00:15:51.341 "write_zeroes": true, 00:15:51.341 "zcopy": true, 00:15:51.341 "get_zone_info": false, 00:15:51.341 "zone_management": false, 00:15:51.341 "zone_append": false, 00:15:51.341 "compare": false, 00:15:51.341 "compare_and_write": false, 00:15:51.341 "abort": true, 00:15:51.341 "seek_hole": false, 00:15:51.341 "seek_data": false, 00:15:51.341 "copy": true, 00:15:51.341 "nvme_iov_md": false 00:15:51.341 }, 00:15:51.341 "memory_domains": [ 00:15:51.341 { 00:15:51.341 "dma_device_id": "system", 00:15:51.341 "dma_device_type": 1 00:15:51.342 }, 00:15:51.342 { 00:15:51.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.342 "dma_device_type": 2 00:15:51.342 } 00:15:51.342 ], 00:15:51.342 "driver_specific": {} 00:15:51.342 }, 00:15:51.342 { 00:15:51.342 "name": "Passthru0", 00:15:51.342 "aliases": [ 00:15:51.342 "21d7b278-f5f4-5508-a8af-5f9c16722982" 00:15:51.342 ], 00:15:51.342 "product_name": "passthru", 00:15:51.342 "block_size": 512, 00:15:51.342 "num_blocks": 16384, 00:15:51.342 "uuid": "21d7b278-f5f4-5508-a8af-5f9c16722982", 00:15:51.342 "assigned_rate_limits": { 00:15:51.342 "rw_ios_per_sec": 0, 00:15:51.342 "rw_mbytes_per_sec": 0, 00:15:51.342 "r_mbytes_per_sec": 0, 00:15:51.342 "w_mbytes_per_sec": 0 00:15:51.342 }, 00:15:51.342 "claimed": false, 00:15:51.342 "zoned": false, 00:15:51.342 "supported_io_types": { 00:15:51.342 "read": true, 00:15:51.342 "write": true, 00:15:51.342 "unmap": true, 00:15:51.342 "flush": true, 00:15:51.342 "reset": true, 00:15:51.342 "nvme_admin": false, 00:15:51.342 "nvme_io": false, 00:15:51.342 "nvme_io_md": false, 00:15:51.342 "write_zeroes": true, 00:15:51.342 "zcopy": true, 00:15:51.342 "get_zone_info": false, 00:15:51.342 "zone_management": false, 00:15:51.342 "zone_append": false, 00:15:51.342 "compare": false, 00:15:51.342 "compare_and_write": false, 00:15:51.342 "abort": true, 00:15:51.342 "seek_hole": false, 00:15:51.342 "seek_data": false, 00:15:51.342 "copy": true, 00:15:51.342 "nvme_iov_md": false 00:15:51.342 }, 00:15:51.342 "memory_domains": [ 00:15:51.342 { 00:15:51.342 "dma_device_id": "system", 00:15:51.342 "dma_device_type": 1 00:15:51.342 }, 00:15:51.342 { 00:15:51.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.342 "dma_device_type": 2 00:15:51.342 } 00:15:51.342 ], 00:15:51.342 "driver_specific": { 00:15:51.342 "passthru": { 00:15:51.342 "name": "Passthru0", 00:15:51.342 "base_bdev_name": "Malloc0" 00:15:51.342 } 00:15:51.342 } 00:15:51.342 } 00:15:51.342 ]' 00:15:51.342 22:14:46 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:15:51.342 22:14:46 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:15:51.342 22:14:46 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:15:51.342 22:14:46 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.342 22:14:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:51.342 22:14:46 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.342 22:14:46 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:51.342 22:14:46 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.342 22:14:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:51.342 22:14:46 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.342 22:14:46 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:15:51.342 22:14:46 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.342 22:14:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:51.603 22:14:46 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.603 22:14:46 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:15:51.603 22:14:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:15:51.603 22:14:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:15:51.603 00:15:51.603 real 0m0.298s 00:15:51.603 user 0m0.186s 00:15:51.603 sys 0m0.048s 00:15:51.603 22:14:46 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:51.603 22:14:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:51.603 ************************************ 00:15:51.603 END TEST rpc_integrity 00:15:51.603 ************************************ 00:15:51.603 22:14:46 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:15:51.603 22:14:46 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:51.603 22:14:46 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:51.603 22:14:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.603 ************************************ 00:15:51.603 START TEST rpc_plugins 00:15:51.603 ************************************ 00:15:51.603 22:14:46 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:15:51.603 22:14:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:15:51.603 22:14:46 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.603 22:14:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:15:51.603 22:14:46 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.603 22:14:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:15:51.603 22:14:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:15:51.603 22:14:46 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.603 22:14:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:15:51.603 22:14:46 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.603 22:14:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:15:51.603 { 00:15:51.603 "name": "Malloc1", 00:15:51.603 "aliases": [ 00:15:51.603 "a043f14d-e765-47aa-9e3f-468b68a4d145" 00:15:51.603 ], 00:15:51.603 "product_name": "Malloc disk", 00:15:51.603 "block_size": 4096, 00:15:51.603 "num_blocks": 256, 00:15:51.603 "uuid": "a043f14d-e765-47aa-9e3f-468b68a4d145", 00:15:51.603 "assigned_rate_limits": { 00:15:51.603 "rw_ios_per_sec": 0, 00:15:51.603 "rw_mbytes_per_sec": 0, 00:15:51.603 "r_mbytes_per_sec": 0, 00:15:51.603 "w_mbytes_per_sec": 0 00:15:51.603 }, 00:15:51.603 "claimed": false, 00:15:51.603 "zoned": false, 00:15:51.603 "supported_io_types": { 00:15:51.603 "read": true, 00:15:51.603 "write": true, 00:15:51.603 "unmap": true, 00:15:51.603 "flush": true, 00:15:51.603 "reset": true, 00:15:51.603 "nvme_admin": false, 00:15:51.603 "nvme_io": false, 00:15:51.604 "nvme_io_md": false, 00:15:51.604 "write_zeroes": true, 00:15:51.604 "zcopy": true, 00:15:51.604 "get_zone_info": false, 00:15:51.604 "zone_management": false, 00:15:51.604 "zone_append": false, 00:15:51.604 "compare": false, 00:15:51.604 "compare_and_write": false, 00:15:51.604 "abort": true, 00:15:51.604 "seek_hole": false, 00:15:51.604 "seek_data": false, 00:15:51.604 "copy": true, 00:15:51.604 "nvme_iov_md": false 
00:15:51.604 }, 00:15:51.604 "memory_domains": [ 00:15:51.604 { 00:15:51.604 "dma_device_id": "system", 00:15:51.604 "dma_device_type": 1 00:15:51.604 }, 00:15:51.604 { 00:15:51.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.604 "dma_device_type": 2 00:15:51.604 } 00:15:51.604 ], 00:15:51.604 "driver_specific": {} 00:15:51.604 } 00:15:51.604 ]' 00:15:51.604 22:14:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:15:51.604 22:14:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:15:51.604 22:14:46 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:15:51.604 22:14:46 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.604 22:14:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:15:51.604 22:14:46 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.604 22:14:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:15:51.604 22:14:46 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.604 22:14:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:15:51.604 22:14:46 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.604 22:14:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:15:51.604 22:14:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:15:51.864 22:14:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:15:51.864 00:15:51.864 real 0m0.151s 00:15:51.864 user 0m0.093s 00:15:51.864 sys 0m0.023s 00:15:51.864 22:14:46 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:51.864 22:14:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:15:51.864 ************************************ 00:15:51.864 END TEST rpc_plugins 00:15:51.864 ************************************ 00:15:51.864 22:14:46 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:15:51.864 22:14:46 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:51.864 22:14:46 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:51.864 22:14:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.864 ************************************ 00:15:51.864 START TEST rpc_trace_cmd_test 00:15:51.864 ************************************ 00:15:51.864 22:14:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:15:51.864 22:14:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:15:51.864 22:14:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:15:51.864 22:14:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.864 22:14:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.864 22:14:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.864 22:14:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:15:51.864 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid4160079", 00:15:51.864 "tpoint_group_mask": "0x8", 00:15:51.864 "iscsi_conn": { 00:15:51.864 "mask": "0x2", 00:15:51.864 "tpoint_mask": "0x0" 00:15:51.864 }, 00:15:51.864 "scsi": { 00:15:51.864 "mask": "0x4", 00:15:51.864 "tpoint_mask": "0x0" 00:15:51.864 }, 00:15:51.864 "bdev": { 00:15:51.864 "mask": "0x8", 00:15:51.864 "tpoint_mask": "0xffffffffffffffff" 00:15:51.864 }, 00:15:51.864 "nvmf_rdma": { 00:15:51.864 "mask": "0x10", 00:15:51.864 "tpoint_mask": "0x0" 00:15:51.864 }, 00:15:51.864 "nvmf_tcp": { 00:15:51.864 "mask": "0x20", 00:15:51.864 
"tpoint_mask": "0x0" 00:15:51.864 }, 00:15:51.864 "ftl": { 00:15:51.864 "mask": "0x40", 00:15:51.864 "tpoint_mask": "0x0" 00:15:51.864 }, 00:15:51.864 "blobfs": { 00:15:51.864 "mask": "0x80", 00:15:51.864 "tpoint_mask": "0x0" 00:15:51.864 }, 00:15:51.864 "dsa": { 00:15:51.864 "mask": "0x200", 00:15:51.864 "tpoint_mask": "0x0" 00:15:51.864 }, 00:15:51.864 "thread": { 00:15:51.864 "mask": "0x400", 00:15:51.864 "tpoint_mask": "0x0" 00:15:51.864 }, 00:15:51.864 "nvme_pcie": { 00:15:51.864 "mask": "0x800", 00:15:51.864 "tpoint_mask": "0x0" 00:15:51.864 }, 00:15:51.864 "iaa": { 00:15:51.864 "mask": "0x1000", 00:15:51.864 "tpoint_mask": "0x0" 00:15:51.864 }, 00:15:51.864 "nvme_tcp": { 00:15:51.864 "mask": "0x2000", 00:15:51.864 "tpoint_mask": "0x0" 00:15:51.864 }, 00:15:51.864 "bdev_nvme": { 00:15:51.864 "mask": "0x4000", 00:15:51.864 "tpoint_mask": "0x0" 00:15:51.864 }, 00:15:51.864 "sock": { 00:15:51.864 "mask": "0x8000", 00:15:51.864 "tpoint_mask": "0x0" 00:15:51.864 }, 00:15:51.864 "blob": { 00:15:51.864 "mask": "0x10000", 00:15:51.865 "tpoint_mask": "0x0" 00:15:51.865 }, 00:15:51.865 "bdev_raid": { 00:15:51.865 "mask": "0x20000", 00:15:51.865 "tpoint_mask": "0x0" 00:15:51.865 } 00:15:51.865 }' 00:15:51.865 22:14:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:15:51.865 22:14:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:15:51.865 22:14:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:15:51.865 22:14:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:15:51.865 22:14:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:15:51.865 22:14:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:15:51.865 22:14:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:15:52.125 22:14:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:15:52.125 22:14:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:15:52.125 22:14:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:15:52.125 00:15:52.125 real 0m0.254s 00:15:52.125 user 0m0.212s 00:15:52.125 sys 0m0.032s 00:15:52.125 22:14:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:52.125 22:14:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.125 ************************************ 00:15:52.125 END TEST rpc_trace_cmd_test 00:15:52.125 ************************************ 00:15:52.125 22:14:47 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:15:52.125 22:14:47 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:15:52.125 22:14:47 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:15:52.125 22:14:47 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:52.125 22:14:47 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:52.125 22:14:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:15:52.125 ************************************ 00:15:52.125 START TEST rpc_daemon_integrity 00:15:52.125 ************************************ 00:15:52.125 22:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:15:52.125 22:14:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:52.125 22:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.125 22:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:52.125 22:14:47 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.125 22:14:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:15:52.125 22:14:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:15:52.125 22:14:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:15:52.125 22:14:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:15:52.125 22:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.125 22:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:52.125 22:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.125 22:14:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:15:52.125 22:14:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:15:52.125 22:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.125 22:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:52.125 22:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.125 22:14:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:15:52.125 { 00:15:52.125 "name": "Malloc2", 00:15:52.125 "aliases": [ 00:15:52.125 "4b62e880-64b4-442b-95a0-c4abf0ff70b9" 00:15:52.125 ], 00:15:52.125 "product_name": "Malloc disk", 00:15:52.125 "block_size": 512, 00:15:52.125 "num_blocks": 16384, 00:15:52.125 "uuid": "4b62e880-64b4-442b-95a0-c4abf0ff70b9", 00:15:52.125 "assigned_rate_limits": { 00:15:52.125 "rw_ios_per_sec": 0, 00:15:52.125 "rw_mbytes_per_sec": 0, 00:15:52.125 "r_mbytes_per_sec": 0, 00:15:52.125 "w_mbytes_per_sec": 0 00:15:52.125 }, 00:15:52.125 "claimed": false, 00:15:52.125 "zoned": false, 00:15:52.125 "supported_io_types": { 00:15:52.125 "read": true, 00:15:52.125 "write": true, 00:15:52.125 "unmap": true, 00:15:52.125 "flush": true, 00:15:52.125 "reset": true, 00:15:52.125 "nvme_admin": false, 00:15:52.125 "nvme_io": false, 00:15:52.125 "nvme_io_md": false, 00:15:52.125 "write_zeroes": true, 00:15:52.125 "zcopy": true, 00:15:52.125 "get_zone_info": false, 00:15:52.125 "zone_management": false, 00:15:52.125 "zone_append": false, 00:15:52.125 "compare": false, 00:15:52.125 "compare_and_write": false, 00:15:52.125 "abort": true, 00:15:52.125 "seek_hole": false, 00:15:52.125 "seek_data": false, 00:15:52.125 "copy": true, 00:15:52.125 "nvme_iov_md": false 00:15:52.125 }, 00:15:52.125 "memory_domains": [ 00:15:52.125 { 00:15:52.125 "dma_device_id": "system", 00:15:52.125 "dma_device_type": 1 00:15:52.125 }, 00:15:52.125 { 00:15:52.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.125 "dma_device_type": 2 00:15:52.125 } 00:15:52.125 ], 00:15:52.125 "driver_specific": {} 00:15:52.125 } 00:15:52.125 ]' 00:15:52.125 22:14:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:15:52.386 22:14:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:15:52.386 22:14:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:15:52.386 22:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.386 22:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:52.386 [2024-10-01 22:14:47.420667] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:15:52.386 [2024-10-01 22:14:47.420696] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:52.386 
[2024-10-01 22:14:47.420708] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x8a9b90 00:15:52.386 [2024-10-01 22:14:47.420716] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:52.386 [2024-10-01 22:14:47.422038] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:52.386 [2024-10-01 22:14:47.422062] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:15:52.386 Passthru0 00:15:52.386 22:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.386 22:14:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:15:52.386 22:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.386 22:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:52.386 22:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.386 22:14:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:15:52.386 { 00:15:52.386 "name": "Malloc2", 00:15:52.386 "aliases": [ 00:15:52.386 "4b62e880-64b4-442b-95a0-c4abf0ff70b9" 00:15:52.386 ], 00:15:52.386 "product_name": "Malloc disk", 00:15:52.386 "block_size": 512, 00:15:52.386 "num_blocks": 16384, 00:15:52.386 "uuid": "4b62e880-64b4-442b-95a0-c4abf0ff70b9", 00:15:52.386 "assigned_rate_limits": { 00:15:52.386 "rw_ios_per_sec": 0, 00:15:52.386 "rw_mbytes_per_sec": 0, 00:15:52.386 "r_mbytes_per_sec": 0, 00:15:52.386 "w_mbytes_per_sec": 0 00:15:52.386 }, 00:15:52.386 "claimed": true, 00:15:52.386 "claim_type": "exclusive_write", 00:15:52.386 "zoned": false, 00:15:52.386 "supported_io_types": { 00:15:52.386 "read": true, 00:15:52.386 "write": true, 00:15:52.386 "unmap": true, 00:15:52.386 "flush": true, 00:15:52.386 "reset": true, 00:15:52.386 "nvme_admin": false, 00:15:52.386 "nvme_io": false, 00:15:52.386 "nvme_io_md": false, 00:15:52.386 "write_zeroes": true, 00:15:52.386 "zcopy": true, 00:15:52.386 "get_zone_info": false, 00:15:52.386 "zone_management": false, 00:15:52.386 "zone_append": false, 00:15:52.386 "compare": false, 00:15:52.386 "compare_and_write": false, 00:15:52.386 "abort": true, 00:15:52.386 "seek_hole": false, 00:15:52.386 "seek_data": false, 00:15:52.386 "copy": true, 00:15:52.386 "nvme_iov_md": false 00:15:52.386 }, 00:15:52.386 "memory_domains": [ 00:15:52.386 { 00:15:52.386 "dma_device_id": "system", 00:15:52.386 "dma_device_type": 1 00:15:52.386 }, 00:15:52.386 { 00:15:52.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.386 "dma_device_type": 2 00:15:52.386 } 00:15:52.386 ], 00:15:52.386 "driver_specific": {} 00:15:52.386 }, 00:15:52.386 { 00:15:52.386 "name": "Passthru0", 00:15:52.386 "aliases": [ 00:15:52.386 "8570330d-dca2-52f2-a0d0-bed235e4c0c9" 00:15:52.386 ], 00:15:52.386 "product_name": "passthru", 00:15:52.386 "block_size": 512, 00:15:52.386 "num_blocks": 16384, 00:15:52.386 "uuid": "8570330d-dca2-52f2-a0d0-bed235e4c0c9", 00:15:52.386 "assigned_rate_limits": { 00:15:52.386 "rw_ios_per_sec": 0, 00:15:52.386 "rw_mbytes_per_sec": 0, 00:15:52.386 "r_mbytes_per_sec": 0, 00:15:52.386 "w_mbytes_per_sec": 0 00:15:52.386 }, 00:15:52.386 "claimed": false, 00:15:52.386 "zoned": false, 00:15:52.386 "supported_io_types": { 00:15:52.386 "read": true, 00:15:52.386 "write": true, 00:15:52.386 "unmap": true, 00:15:52.386 "flush": true, 00:15:52.386 "reset": true, 00:15:52.386 "nvme_admin": false, 00:15:52.386 "nvme_io": false, 00:15:52.386 "nvme_io_md": false, 00:15:52.386 
"write_zeroes": true, 00:15:52.386 "zcopy": true, 00:15:52.386 "get_zone_info": false, 00:15:52.386 "zone_management": false, 00:15:52.386 "zone_append": false, 00:15:52.386 "compare": false, 00:15:52.387 "compare_and_write": false, 00:15:52.387 "abort": true, 00:15:52.387 "seek_hole": false, 00:15:52.387 "seek_data": false, 00:15:52.387 "copy": true, 00:15:52.387 "nvme_iov_md": false 00:15:52.387 }, 00:15:52.387 "memory_domains": [ 00:15:52.387 { 00:15:52.387 "dma_device_id": "system", 00:15:52.387 "dma_device_type": 1 00:15:52.387 }, 00:15:52.387 { 00:15:52.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.387 "dma_device_type": 2 00:15:52.387 } 00:15:52.387 ], 00:15:52.387 "driver_specific": { 00:15:52.387 "passthru": { 00:15:52.387 "name": "Passthru0", 00:15:52.387 "base_bdev_name": "Malloc2" 00:15:52.387 } 00:15:52.387 } 00:15:52.387 } 00:15:52.387 ]' 00:15:52.387 22:14:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:15:52.387 22:14:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:15:52.387 22:14:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:15:52.387 22:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.387 22:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:52.387 22:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.387 22:14:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:15:52.387 22:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.387 22:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:52.387 22:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.387 22:14:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:52.387 22:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.387 22:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:52.387 22:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.387 22:14:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:15:52.387 22:14:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:15:52.387 22:14:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:15:52.387 00:15:52.387 real 0m0.298s 00:15:52.387 user 0m0.194s 00:15:52.387 sys 0m0.040s 00:15:52.387 22:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:52.387 22:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:52.387 ************************************ 00:15:52.387 END TEST rpc_daemon_integrity 00:15:52.387 ************************************ 00:15:52.387 22:14:47 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:15:52.387 22:14:47 rpc -- rpc/rpc.sh@84 -- # killprocess 4160079 00:15:52.387 22:14:47 rpc -- common/autotest_common.sh@950 -- # '[' -z 4160079 ']' 00:15:52.387 22:14:47 rpc -- common/autotest_common.sh@954 -- # kill -0 4160079 00:15:52.387 22:14:47 rpc -- common/autotest_common.sh@955 -- # uname 00:15:52.387 22:14:47 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:52.387 22:14:47 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4160079 00:15:52.647 22:14:47 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:52.647 22:14:47 rpc -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:52.647 22:14:47 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4160079' 00:15:52.647 killing process with pid 4160079 00:15:52.647 22:14:47 rpc -- common/autotest_common.sh@969 -- # kill 4160079 00:15:52.647 22:14:47 rpc -- common/autotest_common.sh@974 -- # wait 4160079 00:15:52.908 00:15:52.908 real 0m2.266s 00:15:52.908 user 0m2.954s 00:15:52.908 sys 0m0.758s 00:15:52.908 22:14:47 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:52.908 22:14:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:15:52.908 ************************************ 00:15:52.908 END TEST rpc 00:15:52.908 ************************************ 00:15:52.908 22:14:47 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:15:52.908 22:14:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:52.908 22:14:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:52.908 22:14:47 -- common/autotest_common.sh@10 -- # set +x 00:15:52.908 ************************************ 00:15:52.908 START TEST skip_rpc 00:15:52.909 ************************************ 00:15:52.909 22:14:48 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:15:52.909 * Looking for test storage... 00:15:52.909 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:15:52.909 22:14:48 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:52.909 22:14:48 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:15:52.909 22:14:48 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:53.169 22:14:48 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:53.169 22:14:48 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:53.169 22:14:48 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:53.169 22:14:48 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:53.169 22:14:48 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:15:53.169 22:14:48 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:15:53.169 22:14:48 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:15:53.169 22:14:48 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:15:53.169 22:14:48 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:15:53.169 22:14:48 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:15:53.169 22:14:48 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:15:53.169 22:14:48 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:53.169 22:14:48 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:15:53.169 22:14:48 skip_rpc -- scripts/common.sh@345 -- # : 1 00:15:53.169 22:14:48 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:53.169 22:14:48 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:53.169 22:14:48 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:15:53.169 22:14:48 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:15:53.169 22:14:48 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:53.169 22:14:48 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:15:53.169 22:14:48 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:15:53.169 22:14:48 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:15:53.169 22:14:48 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:15:53.169 22:14:48 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:53.169 22:14:48 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:15:53.169 22:14:48 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:15:53.169 22:14:48 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:53.169 22:14:48 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:53.170 22:14:48 skip_rpc -- scripts/common.sh@368 -- # return 0 00:15:53.170 22:14:48 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:53.170 22:14:48 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:53.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.170 --rc genhtml_branch_coverage=1 00:15:53.170 --rc genhtml_function_coverage=1 00:15:53.170 --rc genhtml_legend=1 00:15:53.170 --rc geninfo_all_blocks=1 00:15:53.170 --rc geninfo_unexecuted_blocks=1 00:15:53.170 00:15:53.170 ' 00:15:53.170 22:14:48 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:53.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.170 --rc genhtml_branch_coverage=1 00:15:53.170 --rc genhtml_function_coverage=1 00:15:53.170 --rc genhtml_legend=1 00:15:53.170 --rc geninfo_all_blocks=1 00:15:53.170 --rc geninfo_unexecuted_blocks=1 00:15:53.170 00:15:53.170 ' 00:15:53.170 22:14:48 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:53.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.170 --rc genhtml_branch_coverage=1 00:15:53.170 --rc genhtml_function_coverage=1 00:15:53.170 --rc genhtml_legend=1 00:15:53.170 --rc geninfo_all_blocks=1 00:15:53.170 --rc geninfo_unexecuted_blocks=1 00:15:53.170 00:15:53.170 ' 00:15:53.170 22:14:48 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:53.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.170 --rc genhtml_branch_coverage=1 00:15:53.170 --rc genhtml_function_coverage=1 00:15:53.170 --rc genhtml_legend=1 00:15:53.170 --rc geninfo_all_blocks=1 00:15:53.170 --rc geninfo_unexecuted_blocks=1 00:15:53.170 00:15:53.170 ' 00:15:53.170 22:14:48 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:15:53.170 22:14:48 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:15:53.170 22:14:48 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:15:53.170 22:14:48 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:53.170 22:14:48 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:53.170 22:14:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:53.170 ************************************ 00:15:53.170 START TEST skip_rpc 00:15:53.170 ************************************ 00:15:53.170 22:14:48 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:15:53.170 
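[Annotation] The skip_rpc case starting here boots spdk_tgt with --no-rpc-server, so no listener is ever created on /var/tmp/spdk.sock and any RPC must fail; the NOT wrapper around rpc_cmd below asserts exactly that. A condensed sketch of what the harness does (pid bookkeeping and retries simplified; SPDK_DIR as above):

    $SPDK_DIR/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    spdk_pid=$!
    sleep 5

    # With no RPC server listening, spdk_get_version has to fail
    if $SPDK_DIR/scripts/rpc.py spdk_get_version; then
        echo "RPC unexpectedly succeeded" >&2
        exit 1
    fi

    kill $spdk_pid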
22:14:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=4160706 00:15:53.170 22:14:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:15:53.170 22:14:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:15:53.170 22:14:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:15:53.170 [2024-10-01 22:14:48.334182] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:15:53.170 [2024-10-01 22:14:48.334239] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4160706 ] 00:15:53.170 [2024-10-01 22:14:48.399734] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.430 [2024-10-01 22:14:48.474875] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.714 22:14:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:15:58.714 22:14:53 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:15:58.714 22:14:53 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:15:58.714 22:14:53 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:58.714 22:14:53 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:58.714 22:14:53 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:58.714 22:14:53 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:58.714 22:14:53 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:15:58.714 22:14:53 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.714 22:14:53 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:58.714 22:14:53 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:58.714 22:14:53 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:15:58.714 22:14:53 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:58.714 22:14:53 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:58.714 22:14:53 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:58.714 22:14:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:15:58.714 22:14:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 4160706 00:15:58.714 22:14:53 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 4160706 ']' 00:15:58.714 22:14:53 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 4160706 00:15:58.714 22:14:53 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:15:58.714 22:14:53 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:58.714 22:14:53 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4160706 00:15:58.714 22:14:53 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:58.714 22:14:53 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:58.714 22:14:53 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4160706' 00:15:58.714 killing process with pid 4160706 00:15:58.714 22:14:53 
skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 4160706 00:15:58.714 22:14:53 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 4160706 00:15:58.714 00:15:58.714 real 0m5.359s 00:15:58.714 user 0m5.097s 00:15:58.714 sys 0m0.296s 00:15:58.714 22:14:53 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:58.714 22:14:53 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:58.714 ************************************ 00:15:58.714 END TEST skip_rpc 00:15:58.714 ************************************ 00:15:58.714 22:14:53 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:15:58.714 22:14:53 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:58.714 22:14:53 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:58.714 22:14:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:58.714 ************************************ 00:15:58.714 START TEST skip_rpc_with_json 00:15:58.714 ************************************ 00:15:58.714 22:14:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:15:58.714 22:14:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:15:58.714 22:14:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=4161960 00:15:58.714 22:14:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:15:58.714 22:14:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 4161960 00:15:58.714 22:14:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:15:58.714 22:14:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 4161960 ']' 00:15:58.714 22:14:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.714 22:14:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:58.714 22:14:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.714 22:14:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:58.714 22:14:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:15:58.714 [2024-10-01 22:14:53.754802] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:15:58.714 [2024-10-01 22:14:53.754851] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4161960 ] 00:15:58.714 [2024-10-01 22:14:53.815304] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.714 [2024-10-01 22:14:53.880544] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.285 22:14:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:59.285 22:14:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:15:59.285 22:14:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:15:59.285 22:14:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.285 22:14:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:15:59.285 [2024-10-01 22:14:54.538578] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:15:59.546 request: 00:15:59.546 { 00:15:59.546 "trtype": "tcp", 00:15:59.546 "method": "nvmf_get_transports", 00:15:59.546 "req_id": 1 00:15:59.546 } 00:15:59.546 Got JSON-RPC error response 00:15:59.546 response: 00:15:59.546 { 00:15:59.546 "code": -19, 00:15:59.546 "message": "No such device" 00:15:59.546 } 00:15:59.546 22:14:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:59.546 22:14:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:15:59.546 22:14:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.546 22:14:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:15:59.546 [2024-10-01 22:14:54.550708] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:59.546 22:14:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.546 22:14:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:15:59.546 22:14:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.546 22:14:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:15:59.546 22:14:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.546 22:14:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:15:59.546 { 00:15:59.546 "subsystems": [ 00:15:59.546 { 00:15:59.546 "subsystem": "fsdev", 00:15:59.546 "config": [ 00:15:59.546 { 00:15:59.546 "method": "fsdev_set_opts", 00:15:59.546 "params": { 00:15:59.546 "fsdev_io_pool_size": 65535, 00:15:59.546 "fsdev_io_cache_size": 256 00:15:59.546 } 00:15:59.546 } 00:15:59.546 ] 00:15:59.546 }, 00:15:59.546 { 00:15:59.546 "subsystem": "vfio_user_target", 00:15:59.546 "config": null 00:15:59.546 }, 00:15:59.546 { 00:15:59.546 "subsystem": "keyring", 00:15:59.546 "config": [] 00:15:59.546 }, 00:15:59.546 { 00:15:59.546 "subsystem": "iobuf", 00:15:59.546 "config": [ 00:15:59.546 { 00:15:59.546 "method": "iobuf_set_options", 00:15:59.546 "params": { 00:15:59.546 "small_pool_count": 8192, 00:15:59.546 "large_pool_count": 1024, 00:15:59.546 "small_bufsize": 8192, 00:15:59.546 "large_bufsize": 135168 00:15:59.546 } 00:15:59.546 } 00:15:59.546 ] 00:15:59.546 }, 00:15:59.546 { 
00:15:59.546 "subsystem": "sock", 00:15:59.546 "config": [ 00:15:59.546 { 00:15:59.546 "method": "sock_set_default_impl", 00:15:59.546 "params": { 00:15:59.546 "impl_name": "posix" 00:15:59.546 } 00:15:59.546 }, 00:15:59.546 { 00:15:59.546 "method": "sock_impl_set_options", 00:15:59.546 "params": { 00:15:59.546 "impl_name": "ssl", 00:15:59.546 "recv_buf_size": 4096, 00:15:59.546 "send_buf_size": 4096, 00:15:59.546 "enable_recv_pipe": true, 00:15:59.546 "enable_quickack": false, 00:15:59.546 "enable_placement_id": 0, 00:15:59.546 "enable_zerocopy_send_server": true, 00:15:59.546 "enable_zerocopy_send_client": false, 00:15:59.546 "zerocopy_threshold": 0, 00:15:59.546 "tls_version": 0, 00:15:59.546 "enable_ktls": false 00:15:59.546 } 00:15:59.546 }, 00:15:59.546 { 00:15:59.546 "method": "sock_impl_set_options", 00:15:59.546 "params": { 00:15:59.546 "impl_name": "posix", 00:15:59.546 "recv_buf_size": 2097152, 00:15:59.546 "send_buf_size": 2097152, 00:15:59.546 "enable_recv_pipe": true, 00:15:59.546 "enable_quickack": false, 00:15:59.546 "enable_placement_id": 0, 00:15:59.546 "enable_zerocopy_send_server": true, 00:15:59.546 "enable_zerocopy_send_client": false, 00:15:59.546 "zerocopy_threshold": 0, 00:15:59.546 "tls_version": 0, 00:15:59.546 "enable_ktls": false 00:15:59.546 } 00:15:59.546 } 00:15:59.546 ] 00:15:59.546 }, 00:15:59.546 { 00:15:59.546 "subsystem": "vmd", 00:15:59.546 "config": [] 00:15:59.546 }, 00:15:59.546 { 00:15:59.546 "subsystem": "accel", 00:15:59.546 "config": [ 00:15:59.546 { 00:15:59.546 "method": "accel_set_options", 00:15:59.546 "params": { 00:15:59.547 "small_cache_size": 128, 00:15:59.547 "large_cache_size": 16, 00:15:59.547 "task_count": 2048, 00:15:59.547 "sequence_count": 2048, 00:15:59.547 "buf_count": 2048 00:15:59.547 } 00:15:59.547 } 00:15:59.547 ] 00:15:59.547 }, 00:15:59.547 { 00:15:59.547 "subsystem": "bdev", 00:15:59.547 "config": [ 00:15:59.547 { 00:15:59.547 "method": "bdev_set_options", 00:15:59.547 "params": { 00:15:59.547 "bdev_io_pool_size": 65535, 00:15:59.547 "bdev_io_cache_size": 256, 00:15:59.547 "bdev_auto_examine": true, 00:15:59.547 "iobuf_small_cache_size": 128, 00:15:59.547 "iobuf_large_cache_size": 16, 00:15:59.547 "bdev_io_stack_size": 4096 00:15:59.547 } 00:15:59.547 }, 00:15:59.547 { 00:15:59.547 "method": "bdev_raid_set_options", 00:15:59.547 "params": { 00:15:59.547 "process_window_size_kb": 1024, 00:15:59.547 "process_max_bandwidth_mb_sec": 0 00:15:59.547 } 00:15:59.547 }, 00:15:59.547 { 00:15:59.547 "method": "bdev_iscsi_set_options", 00:15:59.547 "params": { 00:15:59.547 "timeout_sec": 30 00:15:59.547 } 00:15:59.547 }, 00:15:59.547 { 00:15:59.547 "method": "bdev_nvme_set_options", 00:15:59.547 "params": { 00:15:59.547 "action_on_timeout": "none", 00:15:59.547 "timeout_us": 0, 00:15:59.547 "timeout_admin_us": 0, 00:15:59.547 "keep_alive_timeout_ms": 10000, 00:15:59.547 "arbitration_burst": 0, 00:15:59.547 "low_priority_weight": 0, 00:15:59.547 "medium_priority_weight": 0, 00:15:59.547 "high_priority_weight": 0, 00:15:59.547 "nvme_adminq_poll_period_us": 10000, 00:15:59.547 "nvme_ioq_poll_period_us": 0, 00:15:59.547 "io_queue_requests": 0, 00:15:59.547 "delay_cmd_submit": true, 00:15:59.547 "transport_retry_count": 4, 00:15:59.547 "bdev_retry_count": 3, 00:15:59.547 "transport_ack_timeout": 0, 00:15:59.547 "ctrlr_loss_timeout_sec": 0, 00:15:59.547 "reconnect_delay_sec": 0, 00:15:59.547 "fast_io_fail_timeout_sec": 0, 00:15:59.547 "disable_auto_failback": false, 00:15:59.547 "generate_uuids": false, 00:15:59.547 "transport_tos": 0, 
00:15:59.547 "nvme_error_stat": false, 00:15:59.547 "rdma_srq_size": 0, 00:15:59.547 "io_path_stat": false, 00:15:59.547 "allow_accel_sequence": false, 00:15:59.547 "rdma_max_cq_size": 0, 00:15:59.547 "rdma_cm_event_timeout_ms": 0, 00:15:59.547 "dhchap_digests": [ 00:15:59.547 "sha256", 00:15:59.547 "sha384", 00:15:59.547 "sha512" 00:15:59.547 ], 00:15:59.547 "dhchap_dhgroups": [ 00:15:59.547 "null", 00:15:59.547 "ffdhe2048", 00:15:59.547 "ffdhe3072", 00:15:59.547 "ffdhe4096", 00:15:59.547 "ffdhe6144", 00:15:59.547 "ffdhe8192" 00:15:59.547 ] 00:15:59.547 } 00:15:59.547 }, 00:15:59.547 { 00:15:59.547 "method": "bdev_nvme_set_hotplug", 00:15:59.547 "params": { 00:15:59.547 "period_us": 100000, 00:15:59.547 "enable": false 00:15:59.547 } 00:15:59.547 }, 00:15:59.547 { 00:15:59.547 "method": "bdev_wait_for_examine" 00:15:59.547 } 00:15:59.547 ] 00:15:59.547 }, 00:15:59.547 { 00:15:59.547 "subsystem": "scsi", 00:15:59.547 "config": null 00:15:59.547 }, 00:15:59.547 { 00:15:59.547 "subsystem": "scheduler", 00:15:59.547 "config": [ 00:15:59.547 { 00:15:59.547 "method": "framework_set_scheduler", 00:15:59.547 "params": { 00:15:59.547 "name": "static" 00:15:59.547 } 00:15:59.547 } 00:15:59.547 ] 00:15:59.547 }, 00:15:59.547 { 00:15:59.547 "subsystem": "vhost_scsi", 00:15:59.547 "config": [] 00:15:59.547 }, 00:15:59.547 { 00:15:59.547 "subsystem": "vhost_blk", 00:15:59.547 "config": [] 00:15:59.547 }, 00:15:59.547 { 00:15:59.547 "subsystem": "ublk", 00:15:59.547 "config": [] 00:15:59.547 }, 00:15:59.547 { 00:15:59.547 "subsystem": "nbd", 00:15:59.547 "config": [] 00:15:59.547 }, 00:15:59.547 { 00:15:59.547 "subsystem": "nvmf", 00:15:59.547 "config": [ 00:15:59.547 { 00:15:59.547 "method": "nvmf_set_config", 00:15:59.547 "params": { 00:15:59.547 "discovery_filter": "match_any", 00:15:59.547 "admin_cmd_passthru": { 00:15:59.547 "identify_ctrlr": false 00:15:59.547 }, 00:15:59.547 "dhchap_digests": [ 00:15:59.547 "sha256", 00:15:59.547 "sha384", 00:15:59.547 "sha512" 00:15:59.547 ], 00:15:59.547 "dhchap_dhgroups": [ 00:15:59.547 "null", 00:15:59.547 "ffdhe2048", 00:15:59.547 "ffdhe3072", 00:15:59.547 "ffdhe4096", 00:15:59.547 "ffdhe6144", 00:15:59.547 "ffdhe8192" 00:15:59.547 ] 00:15:59.547 } 00:15:59.547 }, 00:15:59.547 { 00:15:59.547 "method": "nvmf_set_max_subsystems", 00:15:59.547 "params": { 00:15:59.547 "max_subsystems": 1024 00:15:59.547 } 00:15:59.547 }, 00:15:59.547 { 00:15:59.547 "method": "nvmf_set_crdt", 00:15:59.547 "params": { 00:15:59.547 "crdt1": 0, 00:15:59.547 "crdt2": 0, 00:15:59.547 "crdt3": 0 00:15:59.547 } 00:15:59.547 }, 00:15:59.547 { 00:15:59.547 "method": "nvmf_create_transport", 00:15:59.547 "params": { 00:15:59.547 "trtype": "TCP", 00:15:59.547 "max_queue_depth": 128, 00:15:59.547 "max_io_qpairs_per_ctrlr": 127, 00:15:59.547 "in_capsule_data_size": 4096, 00:15:59.547 "max_io_size": 131072, 00:15:59.547 "io_unit_size": 131072, 00:15:59.547 "max_aq_depth": 128, 00:15:59.547 "num_shared_buffers": 511, 00:15:59.547 "buf_cache_size": 4294967295, 00:15:59.547 "dif_insert_or_strip": false, 00:15:59.547 "zcopy": false, 00:15:59.547 "c2h_success": true, 00:15:59.547 "sock_priority": 0, 00:15:59.547 "abort_timeout_sec": 1, 00:15:59.547 "ack_timeout": 0, 00:15:59.547 "data_wr_pool_size": 0 00:15:59.547 } 00:15:59.547 } 00:15:59.547 ] 00:15:59.547 }, 00:15:59.547 { 00:15:59.547 "subsystem": "iscsi", 00:15:59.547 "config": [ 00:15:59.547 { 00:15:59.547 "method": "iscsi_set_options", 00:15:59.547 "params": { 00:15:59.547 "node_base": "iqn.2016-06.io.spdk", 00:15:59.547 "max_sessions": 
128, 00:15:59.547 "max_connections_per_session": 2, 00:15:59.547 "max_queue_depth": 64, 00:15:59.547 "default_time2wait": 2, 00:15:59.547 "default_time2retain": 20, 00:15:59.547 "first_burst_length": 8192, 00:15:59.547 "immediate_data": true, 00:15:59.547 "allow_duplicated_isid": false, 00:15:59.547 "error_recovery_level": 0, 00:15:59.547 "nop_timeout": 60, 00:15:59.547 "nop_in_interval": 30, 00:15:59.547 "disable_chap": false, 00:15:59.547 "require_chap": false, 00:15:59.547 "mutual_chap": false, 00:15:59.547 "chap_group": 0, 00:15:59.547 "max_large_datain_per_connection": 64, 00:15:59.547 "max_r2t_per_connection": 4, 00:15:59.547 "pdu_pool_size": 36864, 00:15:59.547 "immediate_data_pool_size": 16384, 00:15:59.547 "data_out_pool_size": 2048 00:15:59.547 } 00:15:59.547 } 00:15:59.547 ] 00:15:59.547 } 00:15:59.547 ] 00:15:59.547 } 00:15:59.547 22:14:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:59.547 22:14:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 4161960 00:15:59.547 22:14:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 4161960 ']' 00:15:59.547 22:14:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 4161960 00:15:59.547 22:14:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:15:59.547 22:14:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:59.547 22:14:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4161960 00:15:59.547 22:14:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:59.547 22:14:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:59.547 22:14:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4161960' 00:15:59.547 killing process with pid 4161960 00:15:59.547 22:14:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 4161960 00:15:59.547 22:14:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 4161960 00:16:00.118 22:14:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=4162152 00:16:00.118 22:14:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:16:00.118 22:14:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:16:05.408 22:15:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 4162152 00:16:05.408 22:15:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 4162152 ']' 00:16:05.408 22:15:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 4162152 00:16:05.408 22:15:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:16:05.408 22:15:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:05.408 22:15:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4162152 00:16:05.409 22:15:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:05.409 22:15:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:05.409 22:15:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- 
# echo 'killing process with pid 4162152' 00:16:05.409 killing process with pid 4162152 00:16:05.409 22:15:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 4162152 00:16:05.409 22:15:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 4162152 00:16:05.409 22:15:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:16:05.409 22:15:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:16:05.409 00:16:05.409 real 0m6.723s 00:16:05.409 user 0m6.547s 00:16:05.409 sys 0m0.606s 00:16:05.409 22:15:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:05.409 22:15:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:16:05.409 ************************************ 00:16:05.409 END TEST skip_rpc_with_json 00:16:05.409 ************************************ 00:16:05.409 22:15:00 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:16:05.409 22:15:00 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:05.409 22:15:00 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:05.409 22:15:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.409 ************************************ 00:16:05.409 START TEST skip_rpc_with_delay 00:16:05.409 ************************************ 00:16:05.409 22:15:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:16:05.409 22:15:00 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:16:05.409 22:15:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:16:05.409 22:15:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:16:05.409 22:15:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:16:05.409 22:15:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:05.409 22:15:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:16:05.409 22:15:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:05.409 22:15:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:16:05.409 22:15:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:05.409 22:15:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:16:05.409 22:15:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:16:05.409 22:15:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:16:05.409 
[2024-10-01 22:15:00.568120] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:16:05.409 [2024-10-01 22:15:00.568204] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:16:05.409 22:15:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:16:05.409 22:15:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:05.409 22:15:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:05.409 22:15:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:05.409 00:16:05.409 real 0m0.085s 00:16:05.409 user 0m0.053s 00:16:05.409 sys 0m0.032s 00:16:05.409 22:15:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:05.409 22:15:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:16:05.409 ************************************ 00:16:05.409 END TEST skip_rpc_with_delay 00:16:05.409 ************************************ 00:16:05.409 22:15:00 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:16:05.409 22:15:00 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:16:05.409 22:15:00 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:16:05.409 22:15:00 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:05.409 22:15:00 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:05.409 22:15:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.671 ************************************ 00:16:05.671 START TEST exit_on_failed_rpc_init 00:16:05.671 ************************************ 00:16:05.671 22:15:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:16:05.671 22:15:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=4163402 00:16:05.671 22:15:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 4163402 00:16:05.671 22:15:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:16:05.671 22:15:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 4163402 ']' 00:16:05.671 22:15:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.671 22:15:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:05.671 22:15:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:05.671 22:15:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:05.671 22:15:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:16:05.671 [2024-10-01 22:15:00.728708] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:16:05.671 [2024-10-01 22:15:00.728763] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4163402 ] 00:16:05.671 [2024-10-01 22:15:00.790736] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:05.671 [2024-10-01 22:15:00.857342] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.296 22:15:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:06.296 22:15:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:16:06.296 22:15:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:16:06.296 22:15:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:16:06.296 22:15:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:16:06.296 22:15:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:16:06.296 22:15:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:16:06.296 22:15:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:06.296 22:15:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:16:06.296 22:15:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:06.296 22:15:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:16:06.296 22:15:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:06.296 22:15:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:16:06.296 22:15:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:16:06.296 22:15:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:16:06.558 [2024-10-01 22:15:01.589757] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:16:06.558 [2024-10-01 22:15:01.589810] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4163529 ] 00:16:06.558 [2024-10-01 22:15:01.665949] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.558 [2024-10-01 22:15:01.730394] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:06.558 [2024-10-01 22:15:01.730452] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:16:06.558 [2024-10-01 22:15:01.730463] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:06.558 [2024-10-01 22:15:01.730470] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:06.558 22:15:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:16:06.558 22:15:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:06.558 22:15:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:16:06.558 22:15:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:16:06.558 22:15:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:16:06.558 22:15:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:06.558 22:15:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:16:06.559 22:15:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 4163402 00:16:06.559 22:15:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 4163402 ']' 00:16:06.559 22:15:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 4163402 00:16:06.559 22:15:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:16:06.559 22:15:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:06.559 22:15:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4163402 00:16:06.820 22:15:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:06.820 22:15:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:06.820 22:15:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4163402' 00:16:06.820 killing process with pid 4163402 00:16:06.820 22:15:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 4163402 00:16:06.820 22:15:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 4163402 00:16:07.082 00:16:07.082 real 0m1.462s 00:16:07.082 user 0m1.650s 00:16:07.082 sys 0m0.451s 00:16:07.082 22:15:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:07.082 22:15:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:16:07.082 ************************************ 00:16:07.082 END TEST exit_on_failed_rpc_init 00:16:07.082 ************************************ 00:16:07.082 22:15:02 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:16:07.082 00:16:07.082 real 0m14.135s 00:16:07.082 user 0m13.573s 00:16:07.082 sys 0m1.693s 00:16:07.082 22:15:02 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:07.082 22:15:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:07.082 ************************************ 00:16:07.082 END TEST skip_rpc 00:16:07.082 ************************************ 00:16:07.082 22:15:02 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:16:07.082 22:15:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:07.082 22:15:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:07.082 22:15:02 -- 
common/autotest_common.sh@10 -- # set +x 00:16:07.082 ************************************ 00:16:07.082 START TEST rpc_client 00:16:07.082 ************************************ 00:16:07.082 22:15:02 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:16:07.344 * Looking for test storage... 00:16:07.344 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:16:07.344 22:15:02 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:07.344 22:15:02 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:16:07.344 22:15:02 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:07.344 22:15:02 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:07.344 22:15:02 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:07.344 22:15:02 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:07.344 22:15:02 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:07.344 22:15:02 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:16:07.344 22:15:02 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:16:07.344 22:15:02 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:16:07.344 22:15:02 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:16:07.344 22:15:02 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:16:07.344 22:15:02 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:16:07.344 22:15:02 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:16:07.344 22:15:02 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:07.344 22:15:02 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:16:07.344 22:15:02 rpc_client -- scripts/common.sh@345 -- # : 1 00:16:07.344 22:15:02 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:07.344 22:15:02 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:07.344 22:15:02 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:16:07.344 22:15:02 rpc_client -- scripts/common.sh@353 -- # local d=1 00:16:07.344 22:15:02 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:07.344 22:15:02 rpc_client -- scripts/common.sh@355 -- # echo 1 00:16:07.344 22:15:02 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:16:07.344 22:15:02 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:16:07.344 22:15:02 rpc_client -- scripts/common.sh@353 -- # local d=2 00:16:07.344 22:15:02 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:07.344 22:15:02 rpc_client -- scripts/common.sh@355 -- # echo 2 00:16:07.344 22:15:02 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:16:07.344 22:15:02 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:07.344 22:15:02 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:07.344 22:15:02 rpc_client -- scripts/common.sh@368 -- # return 0 00:16:07.344 22:15:02 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:07.344 22:15:02 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:07.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.344 --rc genhtml_branch_coverage=1 00:16:07.344 --rc genhtml_function_coverage=1 00:16:07.344 --rc genhtml_legend=1 00:16:07.344 --rc geninfo_all_blocks=1 00:16:07.344 --rc geninfo_unexecuted_blocks=1 00:16:07.344 00:16:07.344 ' 00:16:07.344 22:15:02 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:07.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.344 --rc genhtml_branch_coverage=1 00:16:07.344 --rc genhtml_function_coverage=1 00:16:07.344 --rc genhtml_legend=1 00:16:07.344 --rc geninfo_all_blocks=1 00:16:07.344 --rc geninfo_unexecuted_blocks=1 00:16:07.344 00:16:07.344 ' 00:16:07.344 22:15:02 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:07.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.344 --rc genhtml_branch_coverage=1 00:16:07.344 --rc genhtml_function_coverage=1 00:16:07.344 --rc genhtml_legend=1 00:16:07.344 --rc geninfo_all_blocks=1 00:16:07.344 --rc geninfo_unexecuted_blocks=1 00:16:07.344 00:16:07.344 ' 00:16:07.344 22:15:02 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:07.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.344 --rc genhtml_branch_coverage=1 00:16:07.344 --rc genhtml_function_coverage=1 00:16:07.344 --rc genhtml_legend=1 00:16:07.344 --rc geninfo_all_blocks=1 00:16:07.344 --rc geninfo_unexecuted_blocks=1 00:16:07.344 00:16:07.344 ' 00:16:07.344 22:15:02 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:16:07.344 OK 00:16:07.344 22:15:02 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:16:07.344 00:16:07.344 real 0m0.227s 00:16:07.344 user 0m0.141s 00:16:07.344 sys 0m0.101s 00:16:07.344 22:15:02 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:07.344 22:15:02 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:16:07.344 ************************************ 00:16:07.344 END TEST rpc_client 00:16:07.344 ************************************ 00:16:07.344 22:15:02 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
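The xtrace runs above walk the same version-comparison helper several times: 'lt 1.15 2' splits each version string on '.', '-' and ':', then compares the components numerically from left to right to decide whether the installed lcov predates 2.0. A minimal standalone sketch of that logic, assuming plain bash (the function name version_lt and the zero-fill of non-numeric parts are illustrative, not the exact scripts/common.sh source):

version_lt() {
    local IFS=.-:                      # split on '.', '-' and ':' like the trace
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v a b n1=${#ver1[@]} n2=${#ver2[@]}
    for (( v = 0; v < (n1 > n2 ? n1 : n2); v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        [[ $a =~ ^[0-9]+$ ]] || a=0    # non-numeric components compare as 0
        [[ $b =~ ^[0-9]+$ ]] || b=0
        (( a > b )) && return 1        # left side is newer: not "less than"
        (( a < b )) && return 0        # left side is older
    done
    return 1                           # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2"

With lcov 1.15 installed, the branch seen in the trace takes the "less than 2" path and sets the reduced LCOV_OPTS shown above.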
00:16:07.344 22:15:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:07.344 22:15:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:07.344 22:15:02 -- common/autotest_common.sh@10 -- # set +x 00:16:07.344 ************************************ 00:16:07.344 START TEST json_config 00:16:07.344 ************************************ 00:16:07.344 22:15:02 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:16:07.606 22:15:02 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:07.606 22:15:02 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:16:07.606 22:15:02 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:07.606 22:15:02 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:07.606 22:15:02 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:07.606 22:15:02 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:07.606 22:15:02 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:07.606 22:15:02 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:16:07.606 22:15:02 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:16:07.606 22:15:02 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:16:07.606 22:15:02 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:16:07.606 22:15:02 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:16:07.606 22:15:02 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:16:07.606 22:15:02 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:16:07.606 22:15:02 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:07.606 22:15:02 json_config -- scripts/common.sh@344 -- # case "$op" in 00:16:07.606 22:15:02 json_config -- scripts/common.sh@345 -- # : 1 00:16:07.606 22:15:02 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:07.606 22:15:02 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:07.606 22:15:02 json_config -- scripts/common.sh@365 -- # decimal 1 00:16:07.606 22:15:02 json_config -- scripts/common.sh@353 -- # local d=1 00:16:07.606 22:15:02 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:07.606 22:15:02 json_config -- scripts/common.sh@355 -- # echo 1 00:16:07.606 22:15:02 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:16:07.606 22:15:02 json_config -- scripts/common.sh@366 -- # decimal 2 00:16:07.606 22:15:02 json_config -- scripts/common.sh@353 -- # local d=2 00:16:07.606 22:15:02 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:07.606 22:15:02 json_config -- scripts/common.sh@355 -- # echo 2 00:16:07.606 22:15:02 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:16:07.606 22:15:02 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:07.606 22:15:02 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:07.606 22:15:02 json_config -- scripts/common.sh@368 -- # return 0 00:16:07.606 22:15:02 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:07.606 22:15:02 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:07.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.606 --rc genhtml_branch_coverage=1 00:16:07.606 --rc genhtml_function_coverage=1 00:16:07.606 --rc genhtml_legend=1 00:16:07.606 --rc geninfo_all_blocks=1 00:16:07.606 --rc geninfo_unexecuted_blocks=1 00:16:07.606 00:16:07.606 ' 00:16:07.606 22:15:02 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:07.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.606 --rc genhtml_branch_coverage=1 00:16:07.606 --rc genhtml_function_coverage=1 00:16:07.606 --rc genhtml_legend=1 00:16:07.606 --rc geninfo_all_blocks=1 00:16:07.606 --rc geninfo_unexecuted_blocks=1 00:16:07.606 00:16:07.606 ' 00:16:07.606 22:15:02 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:07.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.606 --rc genhtml_branch_coverage=1 00:16:07.606 --rc genhtml_function_coverage=1 00:16:07.606 --rc genhtml_legend=1 00:16:07.606 --rc geninfo_all_blocks=1 00:16:07.606 --rc geninfo_unexecuted_blocks=1 00:16:07.606 00:16:07.606 ' 00:16:07.606 22:15:02 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:07.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.606 --rc genhtml_branch_coverage=1 00:16:07.606 --rc genhtml_function_coverage=1 00:16:07.606 --rc genhtml_legend=1 00:16:07.606 --rc geninfo_all_blocks=1 00:16:07.606 --rc geninfo_unexecuted_blocks=1 00:16:07.606 00:16:07.606 ' 00:16:07.606 22:15:02 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:07.606 22:15:02 json_config -- nvmf/common.sh@7 -- # uname -s 00:16:07.606 22:15:02 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:07.606 22:15:02 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:07.606 22:15:02 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:07.606 22:15:02 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:07.606 22:15:02 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:07.606 22:15:02 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:07.606 22:15:02 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:16:07.606 22:15:02 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:07.606 22:15:02 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:07.606 22:15:02 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:07.606 22:15:02 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:16:07.607 22:15:02 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:16:07.607 22:15:02 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:07.607 22:15:02 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:07.607 22:15:02 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:16:07.607 22:15:02 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:07.607 22:15:02 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:07.607 22:15:02 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:16:07.607 22:15:02 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:07.607 22:15:02 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:07.607 22:15:02 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:07.607 22:15:02 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.607 22:15:02 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.607 22:15:02 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.607 22:15:02 json_config -- paths/export.sh@5 -- # export PATH 00:16:07.607 22:15:02 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.607 22:15:02 json_config -- nvmf/common.sh@51 -- # : 0 00:16:07.607 22:15:02 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:07.607 22:15:02 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:16:07.607 22:15:02 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:07.607 22:15:02 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:07.607 22:15:02 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:07.607 22:15:02 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:07.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:07.607 22:15:02 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:07.607 22:15:02 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:07.607 22:15:02 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:07.607 22:15:02 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:16:07.607 22:15:02 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:16:07.607 22:15:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:16:07.607 22:15:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:16:07.607 22:15:02 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:16:07.607 22:15:02 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:16:07.607 22:15:02 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:16:07.607 22:15:02 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:16:07.607 22:15:02 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:16:07.607 22:15:02 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1536' ['initiator']='-m 0x2 -g -u -s 1024') 00:16:07.607 22:15:02 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:16:07.607 22:15:02 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:16:07.607 22:15:02 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:16:07.607 22:15:02 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:16:07.607 22:15:02 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:16:07.607 22:15:02 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:16:07.607 INFO: JSON configuration test init 00:16:07.607 22:15:02 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:16:07.607 22:15:02 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:16:07.607 22:15:02 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:07.607 22:15:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:16:07.607 22:15:02 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:16:07.607 22:15:02 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:07.607 22:15:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:16:07.607 22:15:02 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:16:07.607 22:15:02 json_config -- 
json_config/common.sh@9 -- # local app=target 00:16:07.607 22:15:02 json_config -- json_config/common.sh@10 -- # shift 00:16:07.607 22:15:02 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:16:07.607 22:15:02 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:16:07.607 22:15:02 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:16:07.607 22:15:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:16:07.607 22:15:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:16:07.607 22:15:02 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=4163954 00:16:07.607 22:15:02 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:16:07.607 Waiting for target to run... 00:16:07.607 22:15:02 json_config -- json_config/common.sh@25 -- # waitforlisten 4163954 /var/tmp/spdk_tgt.sock 00:16:07.607 22:15:02 json_config -- common/autotest_common.sh@831 -- # '[' -z 4163954 ']' 00:16:07.607 22:15:02 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:16:07.607 22:15:02 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1536 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:16:07.607 22:15:02 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:07.607 22:15:02 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:16:07.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:16:07.607 22:15:02 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:07.607 22:15:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:16:07.607 [2024-10-01 22:15:02.844939] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:16:07.607 [2024-10-01 22:15:02.845013] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1536 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4163954 ] 00:16:08.179 [2024-10-01 22:15:03.429747] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.439 [2024-10-01 22:15:03.491677] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.439 22:15:03 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:08.439 22:15:03 json_config -- common/autotest_common.sh@864 -- # return 0 00:16:08.439 22:15:03 json_config -- json_config/common.sh@26 -- # echo '' 00:16:08.439 00:16:08.439 22:15:03 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:16:08.439 22:15:03 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:16:08.439 22:15:03 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:08.439 22:15:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:16:08.439 22:15:03 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:16:08.439 22:15:03 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:16:08.439 22:15:03 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:08.439 22:15:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:16:08.700 22:15:03 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:16:08.700 22:15:03 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:16:08.700 22:15:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:16:09.270 22:15:04 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:16:09.270 22:15:04 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:16:09.270 22:15:04 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:09.270 22:15:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:16:09.270 22:15:04 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:16:09.270 22:15:04 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:16:09.270 22:15:04 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:16:09.270 22:15:04 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:16:09.270 22:15:04 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:16:09.270 22:15:04 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:16:09.270 22:15:04 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:16:09.271 22:15:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:16:09.271 22:15:04 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:16:09.271 22:15:04 json_config -- json_config/json_config.sh@51 -- # local get_types 00:16:09.271 22:15:04 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:16:09.271 22:15:04 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:16:09.271 22:15:04 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:16:09.271 22:15:04 json_config -- json_config/json_config.sh@54 -- # sort 00:16:09.271 22:15:04 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:16:09.271 22:15:04 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:16:09.271 22:15:04 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:16:09.271 22:15:04 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:16:09.271 22:15:04 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:09.271 22:15:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:16:09.271 22:15:04 json_config -- json_config/json_config.sh@62 -- # return 0 00:16:09.271 22:15:04 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:16:09.271 22:15:04 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:16:09.271 22:15:04 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:16:09.271 22:15:04 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:16:09.271 22:15:04 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:16:09.271 22:15:04 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:16:09.271 22:15:04 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:09.271 22:15:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:16:09.271 22:15:04 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:16:09.271 22:15:04 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:16:09.271 22:15:04 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:16:09.271 22:15:04 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:16:09.271 22:15:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:16:09.531 MallocForNvmf0 00:16:09.531 22:15:04 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:16:09.531 22:15:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:16:09.792 MallocForNvmf1 00:16:09.792 22:15:04 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:16:09.792 22:15:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:16:09.792 [2024-10-01 22:15:05.033697] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:10.053 22:15:05 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:10.053 22:15:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:10.053 22:15:05 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:16:10.053 22:15:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:16:10.314 22:15:05 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:16:10.314 22:15:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:16:10.574 22:15:05 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:16:10.574 22:15:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:16:10.574 [2024-10-01 22:15:05.743954] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:16:10.574 22:15:05 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:16:10.574 22:15:05 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:10.574 22:15:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:16:10.574 22:15:05 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:16:10.574 22:15:05 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:10.574 22:15:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:16:10.835 22:15:05 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:16:10.835 22:15:05 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:16:10.835 22:15:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:16:10.835 MallocBdevForConfigChangeCheck 00:16:10.835 22:15:06 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:16:10.835 22:15:06 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:10.835 22:15:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:16:10.835 22:15:06 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:16:10.835 22:15:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:16:11.407 22:15:06 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:16:11.407 INFO: shutting down applications... 
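[editor's note] The nvmf target setup traced above boils down to seven RPCs against the target's UNIX socket. A condensed replay using the exact commands from this run (the $RPC shorthand is ours; order matters: transport before subsystem, namespaces before the listener goes live):

    RPC="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0      # 8 MiB malloc disk, 512 B blocks
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1     # 4 MiB malloc disk, 1024 B blocks
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0           # TCP transport, 8 KiB IO unit, no in-capsule data
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420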
00:16:11.407 22:15:06 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:16:11.407 22:15:06 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:16:11.407 22:15:06 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:16:11.407 22:15:06 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:16:11.669 Calling clear_iscsi_subsystem 00:16:11.669 Calling clear_nvmf_subsystem 00:16:11.669 Calling clear_nbd_subsystem 00:16:11.669 Calling clear_ublk_subsystem 00:16:11.670 Calling clear_vhost_blk_subsystem 00:16:11.670 Calling clear_vhost_scsi_subsystem 00:16:11.670 Calling clear_bdev_subsystem 00:16:11.670 22:15:06 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:16:11.670 22:15:06 json_config -- json_config/json_config.sh@350 -- # count=100 00:16:11.670 22:15:06 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:16:11.670 22:15:06 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:16:11.670 22:15:06 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:16:11.670 22:15:06 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:16:11.930 22:15:07 json_config -- json_config/json_config.sh@352 -- # break 00:16:11.930 22:15:07 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:16:11.930 22:15:07 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:16:11.930 22:15:07 json_config -- json_config/common.sh@31 -- # local app=target 00:16:11.930 22:15:07 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:16:11.930 22:15:07 json_config -- json_config/common.sh@35 -- # [[ -n 4163954 ]] 00:16:11.930 22:15:07 json_config -- json_config/common.sh@38 -- # kill -SIGINT 4163954 00:16:11.930 22:15:07 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:16:11.930 22:15:07 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:16:11.930 22:15:07 json_config -- json_config/common.sh@41 -- # kill -0 4163954 00:16:11.930 22:15:07 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:16:12.501 22:15:07 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:16:12.501 22:15:07 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:16:12.501 22:15:07 json_config -- json_config/common.sh@41 -- # kill -0 4163954 00:16:12.501 22:15:07 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:16:12.501 22:15:07 json_config -- json_config/common.sh@43 -- # break 00:16:12.501 22:15:07 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:16:12.501 22:15:07 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:16:12.501 SPDK target shutdown done 00:16:12.501 22:15:07 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:16:12.501 INFO: relaunching applications... 
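[editor's note] The shutdown just traced is a bounded poll, not a blind kill. A minimal sketch of the json_config/common.sh loop above ($pid stands for the app_pid entry, 4163954 in this run):

    kill -SIGINT "$pid"                        # ask spdk_tgt for a clean exit
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || break    # kill -0 probes liveness, sends no signal
        sleep 0.5                              # up to ~15 s of grace before giving up
    done
    echo 'SPDK target shutdown done'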
00:16:12.501 22:15:07 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:16:12.501 22:15:07 json_config -- json_config/common.sh@9 -- # local app=target 00:16:12.501 22:15:07 json_config -- json_config/common.sh@10 -- # shift 00:16:12.501 22:15:07 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:16:12.501 22:15:07 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:16:12.501 22:15:07 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:16:12.501 22:15:07 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:16:12.501 22:15:07 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:16:12.501 22:15:07 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=4165093 00:16:12.501 22:15:07 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:16:12.502 Waiting for target to run... 00:16:12.502 22:15:07 json_config -- json_config/common.sh@25 -- # waitforlisten 4165093 /var/tmp/spdk_tgt.sock 00:16:12.502 22:15:07 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1536 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:16:12.502 22:15:07 json_config -- common/autotest_common.sh@831 -- # '[' -z 4165093 ']' 00:16:12.502 22:15:07 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:16:12.502 22:15:07 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:12.502 22:15:07 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:16:12.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:16:12.502 22:15:07 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:12.502 22:15:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:16:12.502 [2024-10-01 22:15:07.733769] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:16:12.502 [2024-10-01 22:15:07.733827] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1536 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4165093 ] 00:16:13.072 [2024-10-01 22:15:08.107766] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.072 [2024-10-01 22:15:08.159802] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.645 [2024-10-01 22:15:08.688501] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:13.645 [2024-10-01 22:15:08.720854] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:16:13.645 22:15:08 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:13.645 22:15:08 json_config -- common/autotest_common.sh@864 -- # return 0 00:16:13.645 22:15:08 json_config -- json_config/common.sh@26 -- # echo '' 00:16:13.645 00:16:13.645 22:15:08 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:16:13.645 22:15:08 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:16:13.645 INFO: Checking if target configuration is the same... 
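[editor's note] The relaunch above closes the round trip this test is really about: config pushed in over RPC, config dumped out as JSON, JSON replayed at boot. A sketch with this run's paths abbreviated to the repo root:

    # boot 1: start paused, stream the generated config in, then snapshot it
    build/bin/spdk_tgt -m 0x1 -s 1536 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    scripts/gen_nvme.sh --json-with-subsystems | scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
    # boot 2 (the relaunch above): replay the snapshot at startup, no RPC needed
    build/bin/spdk_tgt -m 0x1 -s 1536 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json &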
00:16:13.645 22:15:08 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:16:13.645 22:15:08 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:16:13.645 22:15:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:16:13.645 + '[' 2 -ne 2 ']' 00:16:13.645 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:16:13.645 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:16:13.645 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:16:13.645 +++ basename /dev/fd/62 00:16:13.645 ++ mktemp /tmp/62.XXX 00:16:13.645 + tmp_file_1=/tmp/62.V8J 00:16:13.645 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:16:13.645 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:16:13.645 + tmp_file_2=/tmp/spdk_tgt_config.json.JdA 00:16:13.645 + ret=0 00:16:13.645 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:16:13.914 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:16:13.914 + diff -u /tmp/62.V8J /tmp/spdk_tgt_config.json.JdA 00:16:13.914 + echo 'INFO: JSON config files are the same' 00:16:13.914 INFO: JSON config files are the same 00:16:13.914 + rm /tmp/62.V8J /tmp/spdk_tgt_config.json.JdA 00:16:13.914 + exit 0 00:16:13.914 22:15:09 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:16:13.914 22:15:09 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:16:13.914 INFO: changing configuration and checking if this can be detected... 00:16:13.914 22:15:09 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:16:13.914 22:15:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:16:14.177 22:15:09 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:16:14.177 22:15:09 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:16:14.177 22:15:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:16:14.177 + '[' 2 -ne 2 ']' 00:16:14.177 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:16:14.177 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
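[editor's note] Both json_diff.sh runs (the "same" case above, the "changed" case below) compare normalized configs, so key order and array order cannot produce false diffs. A sketch of the procedure being traced ($live is the save_config stream arriving on /dev/fd/62; $saved is spdk_tgt_config.json):

    tmp1=$(mktemp /tmp/62.XXX)
    tmp2=$(mktemp /tmp/spdk_tgt_config.json.XXX)
    test/json_config/config_filter.py -method sort < "$live"  > "$tmp1"
    test/json_config/config_filter.py -method sort < "$saved" > "$tmp2"
    diff -u "$tmp1" "$tmp2"    # exit 0: configs match; exit 1 after MallocBdevForConfigChangeCheck is deleted below
    rm "$tmp1" "$tmp2"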
00:16:14.177 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:16:14.177 +++ basename /dev/fd/62 00:16:14.177 ++ mktemp /tmp/62.XXX 00:16:14.177 + tmp_file_1=/tmp/62.uNd 00:16:14.177 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:16:14.177 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:16:14.177 + tmp_file_2=/tmp/spdk_tgt_config.json.EXW 00:16:14.177 + ret=0 00:16:14.177 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:16:14.437 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:16:14.697 + diff -u /tmp/62.uNd /tmp/spdk_tgt_config.json.EXW 00:16:14.697 + ret=1 00:16:14.697 + echo '=== Start of file: /tmp/62.uNd ===' 00:16:14.697 + cat /tmp/62.uNd 00:16:14.697 + echo '=== End of file: /tmp/62.uNd ===' 00:16:14.697 + echo '' 00:16:14.697 + echo '=== Start of file: /tmp/spdk_tgt_config.json.EXW ===' 00:16:14.697 + cat /tmp/spdk_tgt_config.json.EXW 00:16:14.697 + echo '=== End of file: /tmp/spdk_tgt_config.json.EXW ===' 00:16:14.697 + echo '' 00:16:14.697 + rm /tmp/62.uNd /tmp/spdk_tgt_config.json.EXW 00:16:14.697 + exit 1 00:16:14.697 22:15:09 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:16:14.697 INFO: configuration change detected. 00:16:14.697 22:15:09 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:16:14.697 22:15:09 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:16:14.697 22:15:09 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:14.697 22:15:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:16:14.697 22:15:09 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:16:14.697 22:15:09 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:16:14.697 22:15:09 json_config -- json_config/json_config.sh@324 -- # [[ -n 4165093 ]] 00:16:14.697 22:15:09 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:16:14.697 22:15:09 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:16:14.697 22:15:09 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:14.697 22:15:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:16:14.697 22:15:09 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:16:14.697 22:15:09 json_config -- json_config/json_config.sh@200 -- # uname -s 00:16:14.697 22:15:09 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:16:14.697 22:15:09 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:16:14.697 22:15:09 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:16:14.697 22:15:09 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:16:14.697 22:15:09 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:14.697 22:15:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:16:14.697 22:15:09 json_config -- json_config/json_config.sh@330 -- # killprocess 4165093 00:16:14.697 22:15:09 json_config -- common/autotest_common.sh@950 -- # '[' -z 4165093 ']' 00:16:14.697 22:15:09 json_config -- common/autotest_common.sh@954 -- # kill -0 4165093 00:16:14.697 22:15:09 json_config -- common/autotest_common.sh@955 -- # uname 00:16:14.697 22:15:09 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:14.697 22:15:09 
json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4165093 00:16:14.697 22:15:09 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:14.697 22:15:09 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:14.697 22:15:09 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4165093' 00:16:14.697 killing process with pid 4165093 00:16:14.697 22:15:09 json_config -- common/autotest_common.sh@969 -- # kill 4165093 00:16:14.697 22:15:09 json_config -- common/autotest_common.sh@974 -- # wait 4165093 00:16:14.957 22:15:10 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:16:14.957 22:15:10 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:16:14.957 22:15:10 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:14.957 22:15:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:16:15.218 22:15:10 json_config -- json_config/json_config.sh@335 -- # return 0 00:16:15.218 22:15:10 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:16:15.218 INFO: Success 00:16:15.218 00:16:15.218 real 0m7.673s 00:16:15.218 user 0m8.879s 00:16:15.218 sys 0m2.418s 00:16:15.218 22:15:10 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:15.218 22:15:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:16:15.218 ************************************ 00:16:15.218 END TEST json_config 00:16:15.218 ************************************ 00:16:15.218 22:15:10 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:16:15.218 22:15:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:15.218 22:15:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:15.218 22:15:10 -- common/autotest_common.sh@10 -- # set +x 00:16:15.218 ************************************ 00:16:15.218 START TEST json_config_extra_key 00:16:15.218 ************************************ 00:16:15.218 22:15:10 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:16:15.218 22:15:10 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:15.218 22:15:10 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:16:15.218 22:15:10 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:15.218 22:15:10 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:15.218 22:15:10 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:15.218 22:15:10 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:15.218 22:15:10 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:15.218 22:15:10 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:16:15.218 22:15:10 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:16:15.218 22:15:10 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:16:15.218 22:15:10 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:16:15.218 22:15:10 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:16:15.218 22:15:10 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:16:15.218 22:15:10 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:16:15.218 22:15:10 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:15.218 22:15:10 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:16:15.218 22:15:10 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:16:15.218 22:15:10 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:15.218 22:15:10 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:15.218 22:15:10 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:16:15.218 22:15:10 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:16:15.218 22:15:10 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:15.218 22:15:10 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:16:15.218 22:15:10 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:16:15.572 22:15:10 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:16:15.572 22:15:10 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:16:15.572 22:15:10 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:15.572 22:15:10 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:16:15.572 22:15:10 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:16:15.572 22:15:10 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:15.572 22:15:10 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:15.572 22:15:10 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:16:15.572 22:15:10 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:15.572 22:15:10 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:15.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.572 --rc genhtml_branch_coverage=1 00:16:15.572 --rc genhtml_function_coverage=1 00:16:15.572 --rc genhtml_legend=1 00:16:15.572 --rc geninfo_all_blocks=1 00:16:15.572 --rc geninfo_unexecuted_blocks=1 00:16:15.572 00:16:15.572 ' 00:16:15.572 22:15:10 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:15.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.572 --rc genhtml_branch_coverage=1 00:16:15.572 --rc genhtml_function_coverage=1 00:16:15.572 --rc genhtml_legend=1 00:16:15.572 --rc geninfo_all_blocks=1 00:16:15.572 --rc geninfo_unexecuted_blocks=1 00:16:15.572 00:16:15.572 ' 00:16:15.572 22:15:10 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:15.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.572 --rc genhtml_branch_coverage=1 00:16:15.572 --rc genhtml_function_coverage=1 00:16:15.572 --rc genhtml_legend=1 00:16:15.572 --rc geninfo_all_blocks=1 00:16:15.572 --rc geninfo_unexecuted_blocks=1 00:16:15.572 00:16:15.572 ' 00:16:15.572 22:15:10 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:15.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.572 --rc genhtml_branch_coverage=1 00:16:15.572 --rc genhtml_function_coverage=1 00:16:15.572 --rc genhtml_legend=1 00:16:15.572 --rc geninfo_all_blocks=1 00:16:15.572 --rc geninfo_unexecuted_blocks=1 00:16:15.572 00:16:15.572 ' 00:16:15.572 22:15:10 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:15.572 22:15:10 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:16:15.572 22:15:10 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:15.572 22:15:10 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:15.572 22:15:10 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:15.572 22:15:10 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:15.572 22:15:10 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:15.572 22:15:10 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:15.572 22:15:10 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:15.572 22:15:10 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:15.572 22:15:10 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:15.572 22:15:10 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:15.572 22:15:10 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:16:15.572 22:15:10 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:16:15.572 22:15:10 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:15.573 22:15:10 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:15.573 22:15:10 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:16:15.573 22:15:10 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:15.573 22:15:10 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:15.573 22:15:10 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:16:15.573 22:15:10 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:15.573 22:15:10 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:15.573 22:15:10 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:15.573 22:15:10 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.573 22:15:10 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.573 22:15:10 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.573 22:15:10 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:16:15.573 22:15:10 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.573 22:15:10 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:16:15.573 22:15:10 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:15.573 22:15:10 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:15.573 22:15:10 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:15.573 22:15:10 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:15.573 22:15:10 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:15.573 22:15:10 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:15.573 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:15.573 22:15:10 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:15.573 22:15:10 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:15.573 22:15:10 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:15.573 22:15:10 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:16:15.573 22:15:10 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:16:15.573 22:15:10 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:16:15.573 22:15:10 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:16:15.573 22:15:10 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:16:15.573 22:15:10 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1536') 00:16:15.573 22:15:10 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:16:15.573 22:15:10 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:16:15.573 22:15:10 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:16:15.573 22:15:10 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:16:15.573 22:15:10 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:16:15.573 INFO: launching applications... 
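[editor's note] The declares just traced are the whole state of the json_config_extra_key harness: four associative arrays keyed by app name. A compact restatement (paths shortened to the repo root; the launch line mirrors the spdk_tgt invocation that follows):

    declare -A app_pid=([target]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1536')
    declare -A configs_path=([target]='test/json_config/extra_key.json')
    build/bin/spdk_tgt ${app_params[target]} -r ${app_socket[target]} \
        --json ${configs_path[target]} &
    app_pid[target]=$!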
00:16:15.573 22:15:10 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:16:15.573 22:15:10 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:16:15.573 22:15:10 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:16:15.573 22:15:10 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:16:15.573 22:15:10 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:16:15.573 22:15:10 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:16:15.573 22:15:10 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:16:15.573 22:15:10 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:16:15.573 22:15:10 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=4166240 00:16:15.573 22:15:10 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:16:15.573 Waiting for target to run... 00:16:15.573 22:15:10 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 4166240 /var/tmp/spdk_tgt.sock 00:16:15.573 22:15:10 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 4166240 ']' 00:16:15.573 22:15:10 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:16:15.573 22:15:10 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:15.573 22:15:10 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1536 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:16:15.573 22:15:10 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:16:15.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:16:15.573 22:15:10 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:15.573 22:15:10 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:16:15.573 [2024-10-01 22:15:10.578984] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:16:15.573 [2024-10-01 22:15:10.579041] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1536 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4166240 ] 00:16:15.871 [2024-10-01 22:15:11.030196] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.871 [2024-10-01 22:15:11.082018] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.168 22:15:11 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:16.168 22:15:11 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:16:16.168 22:15:11 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:16:16.168 00:16:16.168 22:15:11 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:16:16.168 INFO: shutting down applications... 
00:16:16.168 22:15:11 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:16:16.168 22:15:11 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:16:16.168 22:15:11 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:16:16.168 22:15:11 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 4166240 ]] 00:16:16.168 22:15:11 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 4166240 00:16:16.168 22:15:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:16:16.168 22:15:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:16:16.168 22:15:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 4166240 00:16:16.168 22:15:11 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:16:16.740 22:15:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:16:16.740 22:15:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:16:16.740 22:15:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 4166240 00:16:16.740 22:15:11 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:16:16.740 22:15:11 json_config_extra_key -- json_config/common.sh@43 -- # break 00:16:16.740 22:15:11 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:16:16.740 22:15:11 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:16:16.740 SPDK target shutdown done 00:16:16.740 22:15:11 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:16:16.740 Success 00:16:16.740 00:16:16.740 real 0m1.577s 00:16:16.740 user 0m1.131s 00:16:16.740 sys 0m0.586s 00:16:16.740 22:15:11 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:16.740 22:15:11 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:16:16.740 ************************************ 00:16:16.740 END TEST json_config_extra_key 00:16:16.740 ************************************ 00:16:16.740 22:15:11 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:16:16.740 22:15:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:16.740 22:15:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:16.740 22:15:11 -- common/autotest_common.sh@10 -- # set +x 00:16:16.740 ************************************ 00:16:16.740 START TEST alias_rpc 00:16:16.740 ************************************ 00:16:16.740 22:15:11 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:16:17.001 * Looking for test storage... 
00:16:17.001 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:16:17.001 22:15:12 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:17.001 22:15:12 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:16:17.001 22:15:12 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:17.001 22:15:12 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:17.001 22:15:12 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:17.001 22:15:12 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:17.001 22:15:12 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:17.001 22:15:12 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:17.001 22:15:12 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:17.001 22:15:12 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:17.001 22:15:12 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:17.001 22:15:12 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:17.001 22:15:12 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:17.001 22:15:12 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:17.001 22:15:12 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:17.001 22:15:12 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:17.001 22:15:12 alias_rpc -- scripts/common.sh@345 -- # : 1 00:16:17.001 22:15:12 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:17.001 22:15:12 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:17.001 22:15:12 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:17.001 22:15:12 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:16:17.001 22:15:12 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:17.001 22:15:12 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:16:17.001 22:15:12 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:17.001 22:15:12 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:17.001 22:15:12 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:16:17.001 22:15:12 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:17.001 22:15:12 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:16:17.001 22:15:12 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:17.001 22:15:12 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:17.001 22:15:12 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:17.001 22:15:12 alias_rpc -- scripts/common.sh@368 -- # return 0 00:16:17.001 22:15:12 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:17.001 22:15:12 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:17.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.001 --rc genhtml_branch_coverage=1 00:16:17.001 --rc genhtml_function_coverage=1 00:16:17.001 --rc genhtml_legend=1 00:16:17.001 --rc geninfo_all_blocks=1 00:16:17.001 --rc geninfo_unexecuted_blocks=1 00:16:17.001 00:16:17.001 ' 00:16:17.001 22:15:12 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:17.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.001 --rc genhtml_branch_coverage=1 00:16:17.001 --rc genhtml_function_coverage=1 00:16:17.001 --rc genhtml_legend=1 00:16:17.001 --rc geninfo_all_blocks=1 00:16:17.001 --rc geninfo_unexecuted_blocks=1 00:16:17.001 00:16:17.001 ' 00:16:17.001 22:15:12 
alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:17.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.001 --rc genhtml_branch_coverage=1 00:16:17.001 --rc genhtml_function_coverage=1 00:16:17.001 --rc genhtml_legend=1 00:16:17.001 --rc geninfo_all_blocks=1 00:16:17.001 --rc geninfo_unexecuted_blocks=1 00:16:17.001 00:16:17.001 ' 00:16:17.001 22:15:12 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:17.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.001 --rc genhtml_branch_coverage=1 00:16:17.001 --rc genhtml_function_coverage=1 00:16:17.001 --rc genhtml_legend=1 00:16:17.001 --rc geninfo_all_blocks=1 00:16:17.001 --rc geninfo_unexecuted_blocks=1 00:16:17.002 00:16:17.002 ' 00:16:17.002 22:15:12 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:16:17.002 22:15:12 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=4166741 00:16:17.002 22:15:12 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 4166741 00:16:17.002 22:15:12 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:16:17.002 22:15:12 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 4166741 ']' 00:16:17.002 22:15:12 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.002 22:15:12 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:17.002 22:15:12 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:17.002 22:15:12 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:17.002 22:15:12 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.002 [2024-10-01 22:15:12.242303] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:16:17.002 [2024-10-01 22:15:12.242379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4166741 ] 00:16:17.261 [2024-10-01 22:15:12.305210] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.261 [2024-10-01 22:15:12.369700] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.832 22:15:13 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:17.832 22:15:13 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:16:17.832 22:15:13 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:16:18.092 22:15:13 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 4166741 00:16:18.092 22:15:13 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 4166741 ']' 00:16:18.092 22:15:13 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 4166741 00:16:18.092 22:15:13 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:16:18.092 22:15:13 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:18.092 22:15:13 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4166741 00:16:18.092 22:15:13 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:18.092 22:15:13 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:18.092 22:15:13 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4166741' 00:16:18.092 killing process with pid 4166741 00:16:18.092 22:15:13 alias_rpc -- common/autotest_common.sh@969 -- # kill 4166741 00:16:18.092 22:15:13 alias_rpc -- common/autotest_common.sh@974 -- # wait 4166741 00:16:18.353 00:16:18.353 real 0m1.611s 00:16:18.353 user 0m1.730s 00:16:18.353 sys 0m0.449s 00:16:18.353 22:15:13 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:18.353 22:15:13 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.353 ************************************ 00:16:18.353 END TEST alias_rpc 00:16:18.353 ************************************ 00:16:18.614 22:15:13 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:16:18.614 22:15:13 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:16:18.614 22:15:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:18.614 22:15:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:18.614 22:15:13 -- common/autotest_common.sh@10 -- # set +x 00:16:18.614 ************************************ 00:16:18.614 START TEST spdkcli_tcp 00:16:18.614 ************************************ 00:16:18.614 22:15:13 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:16:18.614 * Looking for test storage... 
00:16:18.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:16:18.614 22:15:13 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:18.614 22:15:13 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:16:18.614 22:15:13 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:18.614 22:15:13 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:18.614 22:15:13 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:18.614 22:15:13 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:18.614 22:15:13 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:18.614 22:15:13 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:16:18.614 22:15:13 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:16:18.614 22:15:13 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:16:18.614 22:15:13 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:16:18.614 22:15:13 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:16:18.614 22:15:13 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:16:18.614 22:15:13 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:16:18.614 22:15:13 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:18.614 22:15:13 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:16:18.614 22:15:13 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:16:18.614 22:15:13 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:18.614 22:15:13 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:18.614 22:15:13 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:16:18.614 22:15:13 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:16:18.614 22:15:13 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:18.614 22:15:13 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:16:18.614 22:15:13 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:16:18.614 22:15:13 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:16:18.614 22:15:13 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:16:18.614 22:15:13 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:18.614 22:15:13 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:16:18.614 22:15:13 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:16:18.614 22:15:13 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:18.614 22:15:13 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:18.614 22:15:13 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:16:18.614 22:15:13 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:18.614 22:15:13 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:18.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.614 --rc genhtml_branch_coverage=1 00:16:18.614 --rc genhtml_function_coverage=1 00:16:18.614 --rc genhtml_legend=1 00:16:18.614 --rc geninfo_all_blocks=1 00:16:18.614 --rc geninfo_unexecuted_blocks=1 00:16:18.614 00:16:18.614 ' 00:16:18.615 22:15:13 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:18.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.615 --rc genhtml_branch_coverage=1 00:16:18.615 --rc genhtml_function_coverage=1 00:16:18.615 --rc genhtml_legend=1 00:16:18.615 --rc geninfo_all_blocks=1 00:16:18.615 --rc 
geninfo_unexecuted_blocks=1 00:16:18.615 00:16:18.615 ' 00:16:18.615 22:15:13 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:18.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.615 --rc genhtml_branch_coverage=1 00:16:18.615 --rc genhtml_function_coverage=1 00:16:18.615 --rc genhtml_legend=1 00:16:18.615 --rc geninfo_all_blocks=1 00:16:18.615 --rc geninfo_unexecuted_blocks=1 00:16:18.615 00:16:18.615 ' 00:16:18.615 22:15:13 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:18.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.615 --rc genhtml_branch_coverage=1 00:16:18.615 --rc genhtml_function_coverage=1 00:16:18.615 --rc genhtml_legend=1 00:16:18.615 --rc geninfo_all_blocks=1 00:16:18.615 --rc geninfo_unexecuted_blocks=1 00:16:18.615 00:16:18.615 ' 00:16:18.615 22:15:13 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:16:18.615 22:15:13 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:16:18.615 22:15:13 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:16:18.615 22:15:13 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:16:18.615 22:15:13 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:16:18.615 22:15:13 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:18.615 22:15:13 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:16:18.615 22:15:13 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:18.615 22:15:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:18.615 22:15:13 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=4167138 00:16:18.615 22:15:13 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 4167138 00:16:18.615 22:15:13 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 4167138 ']' 00:16:18.615 22:15:13 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:16:18.615 22:15:13 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.615 22:15:13 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:18.875 22:15:13 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:18.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:18.875 22:15:13 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:18.875 22:15:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:18.875 [2024-10-01 22:15:13.928300] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
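[editor's note] The spdkcli_tcp fixture set up above (IP_ADDRESS=127.0.0.1, PORT=9998) exists to exercise rpc.py's TCP path: a socat process bridges the TCP port to the target's UNIX socket, and the rpc_get_methods call below travels through that bridge. Sketched from the trace:

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &   # bridge TCP 9998 -> RPC socket
    socat_pid=$!
    # -r: connection retries, -t: per-call timeout in seconds, both as traced below
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"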
00:16:18.875 [2024-10-01 22:15:13.928373] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4167138 ] 00:16:18.875 [2024-10-01 22:15:13.995877] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:18.875 [2024-10-01 22:15:14.070545] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:18.875 [2024-10-01 22:15:14.070547] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.817 22:15:14 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:19.817 22:15:14 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:16:19.817 22:15:14 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=4167154 00:16:19.817 22:15:14 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:16:19.817 22:15:14 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:16:19.817 [ 00:16:19.817 "bdev_malloc_delete", 00:16:19.817 "bdev_malloc_create", 00:16:19.817 "bdev_null_resize", 00:16:19.817 "bdev_null_delete", 00:16:19.817 "bdev_null_create", 00:16:19.817 "bdev_nvme_cuse_unregister", 00:16:19.817 "bdev_nvme_cuse_register", 00:16:19.817 "bdev_opal_new_user", 00:16:19.818 "bdev_opal_set_lock_state", 00:16:19.818 "bdev_opal_delete", 00:16:19.818 "bdev_opal_get_info", 00:16:19.818 "bdev_opal_create", 00:16:19.818 "bdev_nvme_opal_revert", 00:16:19.818 "bdev_nvme_opal_init", 00:16:19.818 "bdev_nvme_send_cmd", 00:16:19.818 "bdev_nvme_set_keys", 00:16:19.818 "bdev_nvme_get_path_iostat", 00:16:19.818 "bdev_nvme_get_mdns_discovery_info", 00:16:19.818 "bdev_nvme_stop_mdns_discovery", 00:16:19.818 "bdev_nvme_start_mdns_discovery", 00:16:19.818 "bdev_nvme_set_multipath_policy", 00:16:19.818 "bdev_nvme_set_preferred_path", 00:16:19.818 "bdev_nvme_get_io_paths", 00:16:19.818 "bdev_nvme_remove_error_injection", 00:16:19.818 "bdev_nvme_add_error_injection", 00:16:19.818 "bdev_nvme_get_discovery_info", 00:16:19.818 "bdev_nvme_stop_discovery", 00:16:19.818 "bdev_nvme_start_discovery", 00:16:19.818 "bdev_nvme_get_controller_health_info", 00:16:19.818 "bdev_nvme_disable_controller", 00:16:19.818 "bdev_nvme_enable_controller", 00:16:19.818 "bdev_nvme_reset_controller", 00:16:19.818 "bdev_nvme_get_transport_statistics", 00:16:19.818 "bdev_nvme_apply_firmware", 00:16:19.818 "bdev_nvme_detach_controller", 00:16:19.818 "bdev_nvme_get_controllers", 00:16:19.818 "bdev_nvme_attach_controller", 00:16:19.818 "bdev_nvme_set_hotplug", 00:16:19.818 "bdev_nvme_set_options", 00:16:19.818 "bdev_passthru_delete", 00:16:19.818 "bdev_passthru_create", 00:16:19.818 "bdev_lvol_set_parent_bdev", 00:16:19.818 "bdev_lvol_set_parent", 00:16:19.818 "bdev_lvol_check_shallow_copy", 00:16:19.818 "bdev_lvol_start_shallow_copy", 00:16:19.818 "bdev_lvol_grow_lvstore", 00:16:19.818 "bdev_lvol_get_lvols", 00:16:19.818 "bdev_lvol_get_lvstores", 00:16:19.818 "bdev_lvol_delete", 00:16:19.818 "bdev_lvol_set_read_only", 00:16:19.818 "bdev_lvol_resize", 00:16:19.818 "bdev_lvol_decouple_parent", 00:16:19.818 "bdev_lvol_inflate", 00:16:19.818 "bdev_lvol_rename", 00:16:19.818 "bdev_lvol_clone_bdev", 00:16:19.818 "bdev_lvol_clone", 00:16:19.818 "bdev_lvol_snapshot", 00:16:19.818 "bdev_lvol_create", 00:16:19.818 "bdev_lvol_delete_lvstore", 00:16:19.818 "bdev_lvol_rename_lvstore", 
00:16:19.818 "bdev_lvol_create_lvstore", 00:16:19.818 "bdev_raid_set_options", 00:16:19.818 "bdev_raid_remove_base_bdev", 00:16:19.818 "bdev_raid_add_base_bdev", 00:16:19.818 "bdev_raid_delete", 00:16:19.818 "bdev_raid_create", 00:16:19.818 "bdev_raid_get_bdevs", 00:16:19.818 "bdev_error_inject_error", 00:16:19.818 "bdev_error_delete", 00:16:19.818 "bdev_error_create", 00:16:19.818 "bdev_split_delete", 00:16:19.818 "bdev_split_create", 00:16:19.818 "bdev_delay_delete", 00:16:19.818 "bdev_delay_create", 00:16:19.818 "bdev_delay_update_latency", 00:16:19.818 "bdev_zone_block_delete", 00:16:19.818 "bdev_zone_block_create", 00:16:19.818 "blobfs_create", 00:16:19.818 "blobfs_detect", 00:16:19.818 "blobfs_set_cache_size", 00:16:19.818 "bdev_aio_delete", 00:16:19.818 "bdev_aio_rescan", 00:16:19.818 "bdev_aio_create", 00:16:19.818 "bdev_ftl_set_property", 00:16:19.818 "bdev_ftl_get_properties", 00:16:19.818 "bdev_ftl_get_stats", 00:16:19.818 "bdev_ftl_unmap", 00:16:19.818 "bdev_ftl_unload", 00:16:19.818 "bdev_ftl_delete", 00:16:19.818 "bdev_ftl_load", 00:16:19.818 "bdev_ftl_create", 00:16:19.818 "bdev_virtio_attach_controller", 00:16:19.818 "bdev_virtio_scsi_get_devices", 00:16:19.818 "bdev_virtio_detach_controller", 00:16:19.818 "bdev_virtio_blk_set_hotplug", 00:16:19.818 "bdev_iscsi_delete", 00:16:19.818 "bdev_iscsi_create", 00:16:19.818 "bdev_iscsi_set_options", 00:16:19.818 "accel_error_inject_error", 00:16:19.818 "ioat_scan_accel_module", 00:16:19.818 "dsa_scan_accel_module", 00:16:19.818 "iaa_scan_accel_module", 00:16:19.818 "vfu_virtio_create_fs_endpoint", 00:16:19.818 "vfu_virtio_create_scsi_endpoint", 00:16:19.818 "vfu_virtio_scsi_remove_target", 00:16:19.818 "vfu_virtio_scsi_add_target", 00:16:19.818 "vfu_virtio_create_blk_endpoint", 00:16:19.818 "vfu_virtio_delete_endpoint", 00:16:19.818 "keyring_file_remove_key", 00:16:19.818 "keyring_file_add_key", 00:16:19.818 "keyring_linux_set_options", 00:16:19.818 "fsdev_aio_delete", 00:16:19.818 "fsdev_aio_create", 00:16:19.818 "iscsi_get_histogram", 00:16:19.818 "iscsi_enable_histogram", 00:16:19.818 "iscsi_set_options", 00:16:19.818 "iscsi_get_auth_groups", 00:16:19.818 "iscsi_auth_group_remove_secret", 00:16:19.818 "iscsi_auth_group_add_secret", 00:16:19.818 "iscsi_delete_auth_group", 00:16:19.818 "iscsi_create_auth_group", 00:16:19.818 "iscsi_set_discovery_auth", 00:16:19.818 "iscsi_get_options", 00:16:19.818 "iscsi_target_node_request_logout", 00:16:19.818 "iscsi_target_node_set_redirect", 00:16:19.818 "iscsi_target_node_set_auth", 00:16:19.818 "iscsi_target_node_add_lun", 00:16:19.818 "iscsi_get_stats", 00:16:19.818 "iscsi_get_connections", 00:16:19.818 "iscsi_portal_group_set_auth", 00:16:19.818 "iscsi_start_portal_group", 00:16:19.818 "iscsi_delete_portal_group", 00:16:19.818 "iscsi_create_portal_group", 00:16:19.818 "iscsi_get_portal_groups", 00:16:19.818 "iscsi_delete_target_node", 00:16:19.818 "iscsi_target_node_remove_pg_ig_maps", 00:16:19.818 "iscsi_target_node_add_pg_ig_maps", 00:16:19.818 "iscsi_create_target_node", 00:16:19.818 "iscsi_get_target_nodes", 00:16:19.818 "iscsi_delete_initiator_group", 00:16:19.818 "iscsi_initiator_group_remove_initiators", 00:16:19.818 "iscsi_initiator_group_add_initiators", 00:16:19.818 "iscsi_create_initiator_group", 00:16:19.818 "iscsi_get_initiator_groups", 00:16:19.818 "nvmf_set_crdt", 00:16:19.818 "nvmf_set_config", 00:16:19.818 "nvmf_set_max_subsystems", 00:16:19.818 "nvmf_stop_mdns_prr", 00:16:19.818 "nvmf_publish_mdns_prr", 00:16:19.818 "nvmf_subsystem_get_listeners", 00:16:19.818 
"nvmf_subsystem_get_qpairs", 00:16:19.818 "nvmf_subsystem_get_controllers", 00:16:19.818 "nvmf_get_stats", 00:16:19.818 "nvmf_get_transports", 00:16:19.818 "nvmf_create_transport", 00:16:19.818 "nvmf_get_targets", 00:16:19.818 "nvmf_delete_target", 00:16:19.818 "nvmf_create_target", 00:16:19.818 "nvmf_subsystem_allow_any_host", 00:16:19.818 "nvmf_subsystem_set_keys", 00:16:19.818 "nvmf_subsystem_remove_host", 00:16:19.818 "nvmf_subsystem_add_host", 00:16:19.818 "nvmf_ns_remove_host", 00:16:19.818 "nvmf_ns_add_host", 00:16:19.818 "nvmf_subsystem_remove_ns", 00:16:19.818 "nvmf_subsystem_set_ns_ana_group", 00:16:19.818 "nvmf_subsystem_add_ns", 00:16:19.818 "nvmf_subsystem_listener_set_ana_state", 00:16:19.818 "nvmf_discovery_get_referrals", 00:16:19.818 "nvmf_discovery_remove_referral", 00:16:19.818 "nvmf_discovery_add_referral", 00:16:19.818 "nvmf_subsystem_remove_listener", 00:16:19.818 "nvmf_subsystem_add_listener", 00:16:19.818 "nvmf_delete_subsystem", 00:16:19.818 "nvmf_create_subsystem", 00:16:19.818 "nvmf_get_subsystems", 00:16:19.818 "env_dpdk_get_mem_stats", 00:16:19.818 "nbd_get_disks", 00:16:19.818 "nbd_stop_disk", 00:16:19.818 "nbd_start_disk", 00:16:19.818 "ublk_recover_disk", 00:16:19.818 "ublk_get_disks", 00:16:19.818 "ublk_stop_disk", 00:16:19.818 "ublk_start_disk", 00:16:19.818 "ublk_destroy_target", 00:16:19.818 "ublk_create_target", 00:16:19.818 "virtio_blk_create_transport", 00:16:19.818 "virtio_blk_get_transports", 00:16:19.818 "vhost_controller_set_coalescing", 00:16:19.818 "vhost_get_controllers", 00:16:19.818 "vhost_delete_controller", 00:16:19.818 "vhost_create_blk_controller", 00:16:19.818 "vhost_scsi_controller_remove_target", 00:16:19.818 "vhost_scsi_controller_add_target", 00:16:19.818 "vhost_start_scsi_controller", 00:16:19.818 "vhost_create_scsi_controller", 00:16:19.818 "thread_set_cpumask", 00:16:19.818 "scheduler_set_options", 00:16:19.818 "framework_get_governor", 00:16:19.818 "framework_get_scheduler", 00:16:19.818 "framework_set_scheduler", 00:16:19.818 "framework_get_reactors", 00:16:19.818 "thread_get_io_channels", 00:16:19.818 "thread_get_pollers", 00:16:19.818 "thread_get_stats", 00:16:19.818 "framework_monitor_context_switch", 00:16:19.818 "spdk_kill_instance", 00:16:19.818 "log_enable_timestamps", 00:16:19.818 "log_get_flags", 00:16:19.818 "log_clear_flag", 00:16:19.818 "log_set_flag", 00:16:19.818 "log_get_level", 00:16:19.818 "log_set_level", 00:16:19.818 "log_get_print_level", 00:16:19.818 "log_set_print_level", 00:16:19.818 "framework_enable_cpumask_locks", 00:16:19.818 "framework_disable_cpumask_locks", 00:16:19.818 "framework_wait_init", 00:16:19.818 "framework_start_init", 00:16:19.818 "scsi_get_devices", 00:16:19.818 "bdev_get_histogram", 00:16:19.818 "bdev_enable_histogram", 00:16:19.818 "bdev_set_qos_limit", 00:16:19.818 "bdev_set_qd_sampling_period", 00:16:19.818 "bdev_get_bdevs", 00:16:19.818 "bdev_reset_iostat", 00:16:19.818 "bdev_get_iostat", 00:16:19.818 "bdev_examine", 00:16:19.818 "bdev_wait_for_examine", 00:16:19.818 "bdev_set_options", 00:16:19.818 "accel_get_stats", 00:16:19.818 "accel_set_options", 00:16:19.818 "accel_set_driver", 00:16:19.818 "accel_crypto_key_destroy", 00:16:19.818 "accel_crypto_keys_get", 00:16:19.818 "accel_crypto_key_create", 00:16:19.818 "accel_assign_opc", 00:16:19.818 "accel_get_module_info", 00:16:19.818 "accel_get_opc_assignments", 00:16:19.818 "vmd_rescan", 00:16:19.818 "vmd_remove_device", 00:16:19.818 "vmd_enable", 00:16:19.818 "sock_get_default_impl", 00:16:19.818 "sock_set_default_impl", 
00:16:19.818 "sock_impl_set_options", 00:16:19.818 "sock_impl_get_options", 00:16:19.818 "iobuf_get_stats", 00:16:19.818 "iobuf_set_options", 00:16:19.819 "keyring_get_keys", 00:16:19.819 "vfu_tgt_set_base_path", 00:16:19.819 "framework_get_pci_devices", 00:16:19.819 "framework_get_config", 00:16:19.819 "framework_get_subsystems", 00:16:19.819 "fsdev_set_opts", 00:16:19.819 "fsdev_get_opts", 00:16:19.819 "trace_get_info", 00:16:19.819 "trace_get_tpoint_group_mask", 00:16:19.819 "trace_disable_tpoint_group", 00:16:19.819 "trace_enable_tpoint_group", 00:16:19.819 "trace_clear_tpoint_mask", 00:16:19.819 "trace_set_tpoint_mask", 00:16:19.819 "notify_get_notifications", 00:16:19.819 "notify_get_types", 00:16:19.819 "spdk_get_version", 00:16:19.819 "rpc_get_methods" 00:16:19.819 ] 00:16:19.819 22:15:14 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:16:19.819 22:15:14 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:19.819 22:15:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:19.819 22:15:14 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:19.819 22:15:14 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 4167138 00:16:19.819 22:15:14 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 4167138 ']' 00:16:19.819 22:15:14 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 4167138 00:16:19.819 22:15:14 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:16:19.819 22:15:14 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:19.819 22:15:14 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4167138 00:16:19.819 22:15:14 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:19.819 22:15:14 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:19.819 22:15:14 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4167138' 00:16:19.819 killing process with pid 4167138 00:16:19.819 22:15:14 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 4167138 00:16:19.819 22:15:14 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 4167138 00:16:20.080 00:16:20.080 real 0m1.626s 00:16:20.080 user 0m2.857s 00:16:20.080 sys 0m0.512s 00:16:20.081 22:15:15 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:20.081 22:15:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:20.081 ************************************ 00:16:20.081 END TEST spdkcli_tcp 00:16:20.081 ************************************ 00:16:20.081 22:15:15 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:16:20.081 22:15:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:20.081 22:15:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:20.081 22:15:15 -- common/autotest_common.sh@10 -- # set +x 00:16:20.342 ************************************ 00:16:20.342 START TEST dpdk_mem_utility 00:16:20.342 ************************************ 00:16:20.342 22:15:15 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:16:20.342 * Looking for test storage... 
00:16:20.342 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:16:20.342 22:15:15 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:20.342 22:15:15 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:16:20.342 22:15:15 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:20.342 22:15:15 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:20.342 22:15:15 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:20.342 22:15:15 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:20.342 22:15:15 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:20.342 22:15:15 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:16:20.342 22:15:15 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:16:20.342 22:15:15 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:16:20.342 22:15:15 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:16:20.342 22:15:15 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:16:20.342 22:15:15 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:16:20.342 22:15:15 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:16:20.342 22:15:15 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:20.342 22:15:15 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:16:20.342 22:15:15 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:16:20.342 22:15:15 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:20.342 22:15:15 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:20.342 22:15:15 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:16:20.342 22:15:15 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:16:20.342 22:15:15 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:20.342 22:15:15 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:16:20.342 22:15:15 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:16:20.342 22:15:15 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:16:20.342 22:15:15 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:16:20.342 22:15:15 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:20.342 22:15:15 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:16:20.342 22:15:15 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:16:20.342 22:15:15 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:20.343 22:15:15 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:20.343 22:15:15 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:16:20.343 22:15:15 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:20.343 22:15:15 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:20.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.343 --rc genhtml_branch_coverage=1 00:16:20.343 --rc genhtml_function_coverage=1 00:16:20.343 --rc genhtml_legend=1 00:16:20.343 --rc geninfo_all_blocks=1 00:16:20.343 --rc geninfo_unexecuted_blocks=1 00:16:20.343 00:16:20.343 ' 00:16:20.343 22:15:15 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:20.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.343 --rc 
genhtml_branch_coverage=1 00:16:20.343 --rc genhtml_function_coverage=1 00:16:20.343 --rc genhtml_legend=1 00:16:20.343 --rc geninfo_all_blocks=1 00:16:20.343 --rc geninfo_unexecuted_blocks=1 00:16:20.343 00:16:20.343 ' 00:16:20.343 22:15:15 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:20.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.343 --rc genhtml_branch_coverage=1 00:16:20.343 --rc genhtml_function_coverage=1 00:16:20.343 --rc genhtml_legend=1 00:16:20.343 --rc geninfo_all_blocks=1 00:16:20.343 --rc geninfo_unexecuted_blocks=1 00:16:20.343 00:16:20.343 ' 00:16:20.343 22:15:15 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:20.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.343 --rc genhtml_branch_coverage=1 00:16:20.343 --rc genhtml_function_coverage=1 00:16:20.343 --rc genhtml_legend=1 00:16:20.343 --rc geninfo_all_blocks=1 00:16:20.343 --rc geninfo_unexecuted_blocks=1 00:16:20.343 00:16:20.343 ' 00:16:20.343 22:15:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:16:20.343 22:15:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=4167553 00:16:20.343 22:15:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 4167553 00:16:20.343 22:15:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:16:20.343 22:15:15 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 4167553 ']' 00:16:20.343 22:15:15 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.343 22:15:15 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:20.343 22:15:15 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.343 22:15:15 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:20.343 22:15:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:16:20.604 [2024-10-01 22:15:15.610955] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
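The memory-introspection pass just below is a two-step flow; a rough sketch, assuming a spdk_tgt already listening on the default /var/tmp/spdk.sock:

scripts/rpc.py env_dpdk_get_mem_stats   # target dumps its DPDK heap state; the reply names the file (/tmp/spdk_mem_dump.txt)
scripts/dpdk_mem_info.py                # summarize the dump: heaps, mempools, memzones
scripts/dpdk_mem_info.py -m 0           # detailed free/busy element map for heap id 0

The sizes printed below are a snapshot taken at the moment of the RPC, so rerunning the dump after creating bdevs or transports should show the corresponding pools appear.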
00:16:20.604 [2024-10-01 22:15:15.611031] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4167553 ] 00:16:20.604 [2024-10-01 22:15:15.672704] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.604 [2024-10-01 22:15:15.738639] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.173 22:15:16 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:21.173 22:15:16 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:16:21.173 22:15:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:16:21.173 22:15:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:16:21.173 22:15:16 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.173 22:15:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:16:21.173 { 00:16:21.173 "filename": "/tmp/spdk_mem_dump.txt" 00:16:21.173 } 00:16:21.173 22:15:16 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.173 22:15:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:16:21.434 DPDK memory size 1100.000000 MiB in 1 heap(s) 00:16:21.434 1 heaps totaling size 1100.000000 MiB 00:16:21.434 size: 1100.000000 MiB heap id: 0 00:16:21.434 end heaps---------- 00:16:21.434 9 mempools totaling size 883.273621 MiB 00:16:21.434 size: 333.169250 MiB name: bdev_io_4167553 00:16:21.434 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:16:21.434 size: 158.602051 MiB name: PDU_data_out_Pool 00:16:21.434 size: 51.011292 MiB name: evtpool_4167553 00:16:21.434 size: 50.003479 MiB name: msgpool_4167553 00:16:21.434 size: 36.509338 MiB name: fsdev_io_4167553 00:16:21.434 size: 21.763794 MiB name: PDU_Pool 00:16:21.434 size: 19.513306 MiB name: SCSI_TASK_Pool 00:16:21.434 size: 0.026123 MiB name: Session_Pool 00:16:21.434 end mempools------- 00:16:21.434 6 memzones totaling size 4.142822 MiB 00:16:21.434 size: 1.000366 MiB name: RG_ring_0_4167553 00:16:21.434 size: 1.000366 MiB name: RG_ring_1_4167553 00:16:21.434 size: 1.000366 MiB name: RG_ring_4_4167553 00:16:21.434 size: 1.000366 MiB name: RG_ring_5_4167553 00:16:21.434 size: 0.125366 MiB name: RG_ring_2_4167553 00:16:21.434 size: 0.015991 MiB name: RG_ring_3_4167553 00:16:21.434 end memzones------- 00:16:21.434 22:15:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:16:21.434 heap id: 0 total size: 1100.000000 MiB number of busy elements: 44 number of free elements: 16 00:16:21.434 list of free elements. 
size: 13.360901 MiB 00:16:21.434 element at address: 0x200000400000 with size: 1.999512 MiB 00:16:21.434 element at address: 0x200000800000 with size: 1.996948 MiB 00:16:21.434 element at address: 0x20002ac00000 with size: 0.999878 MiB 00:16:21.434 element at address: 0x20002ae00000 with size: 0.999878 MiB 00:16:21.434 element at address: 0x200043a00000 with size: 0.994446 MiB 00:16:21.434 element at address: 0x200009600000 with size: 0.959839 MiB 00:16:21.434 element at address: 0x20002b000000 with size: 0.936584 MiB 00:16:21.434 element at address: 0x200000200000 with size: 0.841614 MiB 00:16:21.434 element at address: 0x20002c800000 with size: 0.582886 MiB 00:16:21.434 element at address: 0x200003e00000 with size: 0.495605 MiB 00:16:21.434 element at address: 0x20000d800000 with size: 0.490723 MiB 00:16:21.434 element at address: 0x20002b200000 with size: 0.485657 MiB 00:16:21.434 element at address: 0x200007000000 with size: 0.481934 MiB 00:16:21.434 element at address: 0x200039c00000 with size: 0.410034 MiB 00:16:21.434 element at address: 0x200003a00000 with size: 0.354858 MiB 00:16:21.434 element at address: 0x200015e00000 with size: 0.330505 MiB 00:16:21.434 list of standard malloc elements. size: 199.218628 MiB 00:16:21.434 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:16:21.434 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:16:21.434 element at address: 0x20002acfff80 with size: 1.000122 MiB 00:16:21.434 element at address: 0x20002aefff80 with size: 1.000122 MiB 00:16:21.434 element at address: 0x20002b0fff80 with size: 1.000122 MiB 00:16:21.434 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:16:21.434 element at address: 0x20002b0eff00 with size: 0.062622 MiB 00:16:21.434 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:16:21.434 element at address: 0x20002b0efdc0 with size: 0.000305 MiB 00:16:21.434 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:16:21.434 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:16:21.434 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:16:21.434 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:16:21.434 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:16:21.434 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:16:21.434 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:16:21.434 element at address: 0x200003a5ad80 with size: 0.000183 MiB 00:16:21.434 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:16:21.434 element at address: 0x200003a5f240 with size: 0.000183 MiB 00:16:21.434 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:16:21.434 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:16:21.434 element at address: 0x200003aff880 with size: 0.000183 MiB 00:16:21.434 element at address: 0x200003affa80 with size: 0.000183 MiB 00:16:21.434 element at address: 0x200003affb40 with size: 0.000183 MiB 00:16:21.434 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:16:21.434 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:16:21.434 element at address: 0x20000707b600 with size: 0.000183 MiB 00:16:21.435 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:16:21.435 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:16:21.435 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:16:21.435 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:16:21.435 element at address: 0x20000d87dac0 with size: 0.000183 MiB 
00:16:21.435 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:16:21.435 element at address: 0x200015e549c0 with size: 0.000183 MiB 00:16:21.435 element at address: 0x20002b0efc40 with size: 0.000183 MiB 00:16:21.435 element at address: 0x20002b0efd00 with size: 0.000183 MiB 00:16:21.435 element at address: 0x20002b2bc740 with size: 0.000183 MiB 00:16:21.435 element at address: 0x20002c895380 with size: 0.000183 MiB 00:16:21.435 element at address: 0x20002c895440 with size: 0.000183 MiB 00:16:21.435 element at address: 0x200039c68f80 with size: 0.000183 MiB 00:16:21.435 element at address: 0x200039c69040 with size: 0.000183 MiB 00:16:21.435 element at address: 0x200039c6fc40 with size: 0.000183 MiB 00:16:21.435 element at address: 0x200039c6fe40 with size: 0.000183 MiB 00:16:21.435 element at address: 0x200039c6ff00 with size: 0.000183 MiB 00:16:21.435 list of memzone associated elements. size: 887.420471 MiB 00:16:21.435 element at address: 0x200015f54c80 with size: 332.668823 MiB 00:16:21.435 associated memzone info: size: 332.668701 MiB name: MP_bdev_io_4167553_0 00:16:21.435 element at address: 0x20002c895500 with size: 211.416748 MiB 00:16:21.435 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:16:21.435 element at address: 0x200039c6ffc0 with size: 157.562561 MiB 00:16:21.435 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:16:21.435 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:16:21.435 associated memzone info: size: 48.002930 MiB name: MP_evtpool_4167553_0 00:16:21.435 element at address: 0x200003fff380 with size: 48.003052 MiB 00:16:21.435 associated memzone info: size: 48.002930 MiB name: MP_msgpool_4167553_0 00:16:21.435 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:16:21.435 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_4167553_0 00:16:21.435 element at address: 0x20002b3be940 with size: 20.255554 MiB 00:16:21.435 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:16:21.435 element at address: 0x200043bfeb40 with size: 18.005066 MiB 00:16:21.435 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:16:21.435 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:16:21.435 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_4167553 00:16:21.435 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:16:21.435 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_4167553 00:16:21.435 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:16:21.435 associated memzone info: size: 1.007996 MiB name: MP_evtpool_4167553 00:16:21.435 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:16:21.435 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:16:21.435 element at address: 0x20002b2bc800 with size: 1.008118 MiB 00:16:21.435 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:16:21.435 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:16:21.435 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:16:21.435 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:16:21.435 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:16:21.435 element at address: 0x200003eff180 with size: 1.000488 MiB 00:16:21.435 associated memzone info: size: 1.000366 MiB name: RG_ring_0_4167553 00:16:21.435 element at address: 0x200003affc00 with size: 1.000488 MiB 00:16:21.435 associated memzone 
info: size: 1.000366 MiB name: RG_ring_1_4167553 00:16:21.435 element at address: 0x200015e54a80 with size: 1.000488 MiB 00:16:21.435 associated memzone info: size: 1.000366 MiB name: RG_ring_4_4167553 00:16:21.435 element at address: 0x200043afe940 with size: 1.000488 MiB 00:16:21.435 associated memzone info: size: 1.000366 MiB name: RG_ring_5_4167553 00:16:21.435 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:16:21.435 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_4167553 00:16:21.435 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:16:21.435 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_4167553 00:16:21.435 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:16:21.435 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:16:21.435 element at address: 0x20000707b780 with size: 0.500488 MiB 00:16:21.435 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:16:21.435 element at address: 0x20002b27c540 with size: 0.250488 MiB 00:16:21.435 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:16:21.435 element at address: 0x200003a5f300 with size: 0.125488 MiB 00:16:21.435 associated memzone info: size: 0.125366 MiB name: RG_ring_2_4167553 00:16:21.435 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:16:21.435 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:16:21.435 element at address: 0x200039c69100 with size: 0.023743 MiB 00:16:21.435 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:16:21.435 element at address: 0x200003a5b040 with size: 0.016113 MiB 00:16:21.435 associated memzone info: size: 0.015991 MiB name: RG_ring_3_4167553 00:16:21.435 element at address: 0x200039c6f240 with size: 0.002441 MiB 00:16:21.435 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:16:21.435 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:16:21.435 associated memzone info: size: 0.000183 MiB name: MP_msgpool_4167553 00:16:21.435 element at address: 0x200003aff940 with size: 0.000305 MiB 00:16:21.435 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_4167553 00:16:21.435 element at address: 0x200003a5ae40 with size: 0.000305 MiB 00:16:21.435 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_4167553 00:16:21.435 element at address: 0x200039c6fd00 with size: 0.000305 MiB 00:16:21.435 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:16:21.435 22:15:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:16:21.435 22:15:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 4167553 00:16:21.435 22:15:16 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 4167553 ']' 00:16:21.435 22:15:16 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 4167553 00:16:21.435 22:15:16 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:16:21.435 22:15:16 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:21.435 22:15:16 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4167553 00:16:21.435 22:15:16 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:21.435 22:15:16 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:21.435 22:15:16 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
4167553' 00:16:21.435 killing process with pid 4167553 00:16:21.435 22:15:16 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 4167553 00:16:21.435 22:15:16 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 4167553 00:16:21.696 00:16:21.696 real 0m1.506s 00:16:21.696 user 0m1.543s 00:16:21.696 sys 0m0.455s 00:16:21.696 22:15:16 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:21.696 22:15:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:16:21.696 ************************************ 00:16:21.696 END TEST dpdk_mem_utility 00:16:21.696 ************************************ 00:16:21.696 22:15:16 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:16:21.696 22:15:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:21.696 22:15:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:21.696 22:15:16 -- common/autotest_common.sh@10 -- # set +x 00:16:21.696 ************************************ 00:16:21.696 START TEST event 00:16:21.696 ************************************ 00:16:21.696 22:15:16 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:16:21.958 * Looking for test storage... 00:16:21.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:16:21.958 22:15:17 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:21.958 22:15:17 event -- common/autotest_common.sh@1681 -- # lcov --version 00:16:21.958 22:15:17 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:21.958 22:15:17 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:21.958 22:15:17 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:21.958 22:15:17 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:21.958 22:15:17 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:21.958 22:15:17 event -- scripts/common.sh@336 -- # IFS=.-: 00:16:21.958 22:15:17 event -- scripts/common.sh@336 -- # read -ra ver1 00:16:21.958 22:15:17 event -- scripts/common.sh@337 -- # IFS=.-: 00:16:21.958 22:15:17 event -- scripts/common.sh@337 -- # read -ra ver2 00:16:21.958 22:15:17 event -- scripts/common.sh@338 -- # local 'op=<' 00:16:21.958 22:15:17 event -- scripts/common.sh@340 -- # ver1_l=2 00:16:21.958 22:15:17 event -- scripts/common.sh@341 -- # ver2_l=1 00:16:21.958 22:15:17 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:21.958 22:15:17 event -- scripts/common.sh@344 -- # case "$op" in 00:16:21.958 22:15:17 event -- scripts/common.sh@345 -- # : 1 00:16:21.958 22:15:17 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:21.958 22:15:17 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:21.958 22:15:17 event -- scripts/common.sh@365 -- # decimal 1 00:16:21.958 22:15:17 event -- scripts/common.sh@353 -- # local d=1 00:16:21.958 22:15:17 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:21.958 22:15:17 event -- scripts/common.sh@355 -- # echo 1 00:16:21.958 22:15:17 event -- scripts/common.sh@365 -- # ver1[v]=1 00:16:21.958 22:15:17 event -- scripts/common.sh@366 -- # decimal 2 00:16:21.958 22:15:17 event -- scripts/common.sh@353 -- # local d=2 00:16:21.958 22:15:17 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:21.958 22:15:17 event -- scripts/common.sh@355 -- # echo 2 00:16:21.958 22:15:17 event -- scripts/common.sh@366 -- # ver2[v]=2 00:16:21.958 22:15:17 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:21.958 22:15:17 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:21.958 22:15:17 event -- scripts/common.sh@368 -- # return 0 00:16:21.958 22:15:17 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:21.958 22:15:17 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:21.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.958 --rc genhtml_branch_coverage=1 00:16:21.958 --rc genhtml_function_coverage=1 00:16:21.958 --rc genhtml_legend=1 00:16:21.958 --rc geninfo_all_blocks=1 00:16:21.958 --rc geninfo_unexecuted_blocks=1 00:16:21.958 00:16:21.958 ' 00:16:21.958 22:15:17 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:21.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.958 --rc genhtml_branch_coverage=1 00:16:21.958 --rc genhtml_function_coverage=1 00:16:21.958 --rc genhtml_legend=1 00:16:21.958 --rc geninfo_all_blocks=1 00:16:21.958 --rc geninfo_unexecuted_blocks=1 00:16:21.958 00:16:21.958 ' 00:16:21.958 22:15:17 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:21.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.959 --rc genhtml_branch_coverage=1 00:16:21.959 --rc genhtml_function_coverage=1 00:16:21.959 --rc genhtml_legend=1 00:16:21.959 --rc geninfo_all_blocks=1 00:16:21.959 --rc geninfo_unexecuted_blocks=1 00:16:21.959 00:16:21.959 ' 00:16:21.959 22:15:17 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:21.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.959 --rc genhtml_branch_coverage=1 00:16:21.959 --rc genhtml_function_coverage=1 00:16:21.959 --rc genhtml_legend=1 00:16:21.959 --rc geninfo_all_blocks=1 00:16:21.959 --rc geninfo_unexecuted_blocks=1 00:16:21.959 00:16:21.959 ' 00:16:21.959 22:15:17 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:16:21.959 22:15:17 event -- bdev/nbd_common.sh@6 -- # set -e 00:16:21.959 22:15:17 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:16:21.959 22:15:17 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:16:21.959 22:15:17 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:21.959 22:15:17 event -- common/autotest_common.sh@10 -- # set +x 00:16:21.959 ************************************ 00:16:21.959 START TEST event_perf 00:16:21.959 ************************************ 00:16:21.959 22:15:17 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:16:21.959 Running I/O for 1 seconds...[2024-10-01 22:15:17.197355] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:16:21.959 [2024-10-01 22:15:17.197462] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4167956 ] 00:16:22.220 [2024-10-01 22:15:17.267613] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:22.220 [2024-10-01 22:15:17.343747] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:22.220 [2024-10-01 22:15:17.343861] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:22.220 [2024-10-01 22:15:17.344016] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.220 Running I/O for 1 seconds...[2024-10-01 22:15:17.344016] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:16:23.163 00:16:23.163 lcore 0: 187917 00:16:23.163 lcore 1: 187917 00:16:23.163 lcore 2: 187918 00:16:23.163 lcore 3: 187919 00:16:23.163 done. 00:16:23.163 00:16:23.163 real 0m1.222s 00:16:23.163 user 0m4.139s 00:16:23.163 sys 0m0.081s 00:16:23.163 22:15:18 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:23.163 22:15:18 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:16:23.163 ************************************ 00:16:23.163 END TEST event_perf 00:16:23.163 ************************************ 00:16:23.423 22:15:18 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:16:23.423 22:15:18 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:23.423 22:15:18 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:23.423 22:15:18 event -- common/autotest_common.sh@10 -- # set +x 00:16:23.423 ************************************ 00:16:23.423 START TEST event_reactor 00:16:23.423 ************************************ 00:16:23.423 22:15:18 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:16:23.423 [2024-10-01 22:15:18.498150] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
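The per-lcore counters above are the point of event_perf: each reactor processes locally queued events for the configured duration and reports its count. A sketch of the invocation, assuming a built test tree:

test/event/event_perf/event_perf -m 0xF -t 1   # 4 reactors, 1-second run
# prints one "lcore N: <events processed>" line per reactor, then "done."

The near-identical counts across cores (about 187900 each) indicate the reactors stayed evenly loaded for the run.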
00:16:23.423 [2024-10-01 22:15:18.498249] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4168148 ] 00:16:23.423 [2024-10-01 22:15:18.564118] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.423 [2024-10-01 22:15:18.633195] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.810 test_start 00:16:24.810 oneshot 00:16:24.810 tick 100 00:16:24.810 tick 100 00:16:24.810 tick 250 00:16:24.810 tick 100 00:16:24.810 tick 100 00:16:24.810 tick 100 00:16:24.810 tick 250 00:16:24.810 tick 500 00:16:24.810 tick 100 00:16:24.810 tick 100 00:16:24.810 tick 250 00:16:24.810 tick 100 00:16:24.810 tick 100 00:16:24.810 test_end 00:16:24.810 00:16:24.810 real 0m1.209s 00:16:24.810 user 0m1.136s 00:16:24.810 sys 0m0.068s 00:16:24.810 22:15:19 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:24.810 22:15:19 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:16:24.810 ************************************ 00:16:24.810 END TEST event_reactor 00:16:24.810 ************************************ 00:16:24.810 22:15:19 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:16:24.810 22:15:19 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:24.810 22:15:19 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:24.810 22:15:19 event -- common/autotest_common.sh@10 -- # set +x 00:16:24.810 ************************************ 00:16:24.810 START TEST event_reactor_perf 00:16:24.810 ************************************ 00:16:24.810 22:15:19 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:16:24.810 [2024-10-01 22:15:19.783331] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
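The test_start/tick/test_end block above comes from the reactor exerciser, which queues a one-shot event plus periodic pollers and prints a line per expiration; reading the tick labels (100, 250, 500) as the pollers' configured periods is an interpretation, not something the log states. The invocation itself is trivial:

test/event/reactor/reactor -t 1   # single reactor, 1-second run

The reactor_perf run that follows measures the same machinery for raw throughput and reports a single events-per-second figure.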
00:16:24.810 [2024-10-01 22:15:19.783410] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4168345 ] 00:16:24.810 [2024-10-01 22:15:19.848564] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.810 [2024-10-01 22:15:19.914410] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.750 test_start 00:16:25.750 test_end 00:16:25.750 Performance: 366597 events per second 00:16:25.750 00:16:25.750 real 0m1.207s 00:16:25.750 user 0m1.126s 00:16:25.750 sys 0m0.077s 00:16:25.750 22:15:20 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:25.750 22:15:20 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:16:25.750 ************************************ 00:16:25.750 END TEST event_reactor_perf 00:16:25.751 ************************************ 00:16:26.012 22:15:21 event -- event/event.sh@49 -- # uname -s 00:16:26.012 22:15:21 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:16:26.012 22:15:21 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:16:26.012 22:15:21 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:26.012 22:15:21 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:26.012 22:15:21 event -- common/autotest_common.sh@10 -- # set +x 00:16:26.012 ************************************ 00:16:26.012 START TEST event_scheduler 00:16:26.012 ************************************ 00:16:26.012 22:15:21 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:16:26.012 * Looking for test storage... 
00:16:26.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:16:26.012 22:15:21 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:26.012 22:15:21 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:16:26.012 22:15:21 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:26.012 22:15:21 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:26.012 22:15:21 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:26.012 22:15:21 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:26.012 22:15:21 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:26.012 22:15:21 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:16:26.012 22:15:21 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:16:26.012 22:15:21 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:16:26.012 22:15:21 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:16:26.012 22:15:21 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:16:26.012 22:15:21 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:16:26.012 22:15:21 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:16:26.012 22:15:21 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:26.012 22:15:21 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:16:26.012 22:15:21 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:16:26.012 22:15:21 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:26.012 22:15:21 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:26.012 22:15:21 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:16:26.012 22:15:21 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:16:26.012 22:15:21 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:26.012 22:15:21 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:16:26.012 22:15:21 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:16:26.012 22:15:21 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:16:26.012 22:15:21 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:16:26.012 22:15:21 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:26.012 22:15:21 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:16:26.012 22:15:21 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:16:26.012 22:15:21 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:26.012 22:15:21 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:26.012 22:15:21 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:16:26.012 22:15:21 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:26.012 22:15:21 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:26.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.012 --rc genhtml_branch_coverage=1 00:16:26.012 --rc genhtml_function_coverage=1 00:16:26.012 --rc genhtml_legend=1 00:16:26.012 --rc geninfo_all_blocks=1 00:16:26.012 --rc geninfo_unexecuted_blocks=1 00:16:26.012 00:16:26.012 ' 00:16:26.012 22:15:21 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:26.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.012 --rc genhtml_branch_coverage=1 00:16:26.012 --rc genhtml_function_coverage=1 00:16:26.012 --rc genhtml_legend=1 00:16:26.012 --rc geninfo_all_blocks=1 00:16:26.012 --rc geninfo_unexecuted_blocks=1 00:16:26.012 00:16:26.012 ' 00:16:26.012 22:15:21 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:26.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.012 --rc genhtml_branch_coverage=1 00:16:26.012 --rc genhtml_function_coverage=1 00:16:26.012 --rc genhtml_legend=1 00:16:26.012 --rc geninfo_all_blocks=1 00:16:26.012 --rc geninfo_unexecuted_blocks=1 00:16:26.012 00:16:26.012 ' 00:16:26.012 22:15:21 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:26.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.012 --rc genhtml_branch_coverage=1 00:16:26.012 --rc genhtml_function_coverage=1 00:16:26.012 --rc genhtml_legend=1 00:16:26.012 --rc geninfo_all_blocks=1 00:16:26.012 --rc geninfo_unexecuted_blocks=1 00:16:26.012 00:16:26.012 ' 00:16:26.012 22:15:21 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:16:26.012 22:15:21 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=4168735 00:16:26.012 22:15:21 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:16:26.012 22:15:21 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 4168735 00:16:26.012 22:15:21 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc 
-f 00:16:26.012 22:15:21 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 4168735 ']' 00:16:26.012 22:15:21 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:26.012 22:15:21 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:26.012 22:15:21 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:26.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:26.012 22:15:21 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:26.012 22:15:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:16:26.272 [2024-10-01 22:15:21.299307] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:16:26.272 [2024-10-01 22:15:21.299363] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4168735 ] 00:16:26.272 [2024-10-01 22:15:21.352176] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:26.272 [2024-10-01 22:15:21.407028] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.272 [2024-10-01 22:15:21.407184] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:26.272 [2024-10-01 22:15:21.407339] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:26.272 [2024-10-01 22:15:21.407340] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:16:27.213 22:15:22 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:27.213 22:15:22 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:16:27.213 22:15:22 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:16:27.213 22:15:22 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.213 22:15:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:16:27.213 [2024-10-01 22:15:22.105487] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:16:27.213 [2024-10-01 22:15:22.105502] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:16:27.213 [2024-10-01 22:15:22.105510] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:16:27.213 [2024-10-01 22:15:22.105514] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:16:27.214 [2024-10-01 22:15:22.105518] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:16:27.214 22:15:22 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.214 22:15:22 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:16:27.214 22:15:22 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.214 22:15:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:16:27.214 [2024-10-01 22:15:22.212859] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
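The sequence above is the standard late-init dance: --wait-for-rpc holds the application before framework init so a scheduler can be selected first, and the dpdk governor error is tolerated (per the notices, when the core mask covers only part of an SMT sibling set the dynamic scheduler runs without the governor, with load limit 20, core limit 80, core busy 95). A sketch of the same flow against a generic SPDK app, assuming the default /var/tmp/spdk.sock RPC socket:

build/bin/spdk_tgt -m 0xF --wait-for-rpc &
waitforlisten "$!"                               # helper from autotest_common.sh
scripts/rpc.py framework_set_scheduler dynamic   # must happen before init
scripts/rpc.py framework_start_init              # reactors start scheduling from here

Both RPCs appear in the rpc_get_methods listing earlier in this log, so the flow is not specific to the scheduler test app.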
00:16:27.214 22:15:22 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:27.214 22:15:22 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:16:27.214 22:15:22 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:16:27.214 22:15:22 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable
00:16:27.214 22:15:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:16:27.214 ************************************
00:16:27.214 START TEST scheduler_create_thread
00:16:27.214 ************************************
00:16:27.214 22:15:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread
00:16:27.214 22:15:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:16:27.214 22:15:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:27.214 22:15:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:16:27.214 2
00:16:27.214 22:15:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:27.214 22:15:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:16:27.214 22:15:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:27.214 22:15:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:16:27.214 3
00:16:27.214 22:15:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:27.214 22:15:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:16:27.214 22:15:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:27.214 22:15:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:16:27.214 4
00:16:27.214 22:15:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:27.214 22:15:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:16:27.214 22:15:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:27.214 22:15:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:16:27.214 5
00:16:27.214 22:15:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:27.214 22:15:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:16:27.214 22:15:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:27.214 22:15:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:16:27.214 6
00:16:27.214 22:15:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:27.214 22:15:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:16:27.214 22:15:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:27.214 22:15:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:16:27.214 7
00:16:27.214 22:15:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:27.214 22:15:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:16:27.214 22:15:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:27.214 22:15:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:16:27.214 8
00:16:27.214 22:15:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:27.214 22:15:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:16:27.214 22:15:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:27.214 22:15:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:16:27.214 9
00:16:27.214 22:15:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:27.214 22:15:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:16:27.214 22:15:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:27.214 22:15:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:16:27.784 10
00:16:27.784 22:15:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:27.784 22:15:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:16:27.784 22:15:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:27.784 22:15:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:16:29.165 22:15:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:29.165 22:15:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:16:29.165 22:15:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:16:29.165 22:15:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:29.165 22:15:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:16:29.734 22:15:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:29.734 22:15:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:16:29.734 22:15:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:29.734 22:15:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:16:30.675 22:15:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:30.675 22:15:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:16:30.675 22:15:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:16:30.675 22:15:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:30.675 22:15:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:16:31.246 22:15:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:31.246
00:16:31.246 real 0m4.225s
00:16:31.246 user 0m0.024s
00:16:31.246 sys 0m0.007s
00:16:31.246 22:15:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable
00:16:31.246 22:15:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:16:31.246 ************************************
00:16:31.246 END TEST scheduler_create_thread
00:16:31.246 ************************************
00:16:31.507 22:15:26 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:16:31.507 22:15:26 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 4168735
00:16:31.507 22:15:26 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 4168735 ']'
00:16:31.507 22:15:26 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 4168735
00:16:31.507 22:15:26 event.event_scheduler -- common/autotest_common.sh@955 -- # uname
00:16:31.507 22:15:26 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:16:31.507 22:15:26 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4168735
00:16:31.507 22:15:26 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:16:31.507 22:15:26 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:16:31.507 22:15:26 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4168735'
00:16:31.507 killing process with pid 4168735
00:16:31.507 22:15:26 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 4168735
00:16:31.507 22:15:26 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 4168735
00:16:31.507 [2024-10-01 22:15:26.758105] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
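[editor's note] The teardown traced above is the stock killprocess helper from common/autotest_common.sh: check that a pid was passed, probe it with kill -0, refuse to signal sudo directly, then SIGTERM and reap. A condensed bash sketch of that pattern (the real helper also handles non-Linux hosts and sudo child processes, which are omitted here):

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                # no pid passed in
    kill -0 "$pid" 2> /dev/null || return 0  # already gone, nothing to do
    if [ "$(uname)" = Linux ]; then
        # never SIGTERM sudo itself; look up the command name first
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"          # SIGTERM by default
    wait "$pid" || true  # reap it (works when the pid is our child) so the test exits cleanly
}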
00:16:31.768 00:16:31.768 real 0m5.925s 00:16:31.768 user 0m13.166s 00:16:31.768 sys 0m0.427s 00:16:31.768 22:15:26 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:31.768 22:15:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:16:31.768 ************************************ 00:16:31.768 END TEST event_scheduler 00:16:31.768 ************************************ 00:16:31.768 22:15:27 event -- event/event.sh@51 -- # modprobe -n nbd 00:16:32.029 22:15:27 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:16:32.029 22:15:27 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:32.029 22:15:27 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:32.029 22:15:27 event -- common/autotest_common.sh@10 -- # set +x 00:16:32.029 ************************************ 00:16:32.029 START TEST app_repeat 00:16:32.029 ************************************ 00:16:32.029 22:15:27 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:16:32.029 22:15:27 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:32.029 22:15:27 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:32.029 22:15:27 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:16:32.029 22:15:27 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:32.029 22:15:27 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:16:32.029 22:15:27 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:16:32.029 22:15:27 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:16:32.029 22:15:27 event.app_repeat -- event/event.sh@19 -- # repeat_pid=4170010 00:16:32.029 22:15:27 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:16:32.029 22:15:27 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:16:32.029 22:15:27 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 4170010' 00:16:32.029 Process app_repeat pid: 4170010 00:16:32.029 22:15:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:16:32.029 22:15:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:16:32.029 spdk_app_start Round 0 00:16:32.029 22:15:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4170010 /var/tmp/spdk-nbd.sock 00:16:32.029 22:15:27 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 4170010 ']' 00:16:32.029 22:15:27 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:32.029 22:15:27 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:32.029 22:15:27 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:32.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:32.029 22:15:27 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:32.029 22:15:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:16:32.029 [2024-10-01 22:15:27.099786] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
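[editor's note] app_repeat is started in the background with a private RPC socket (-r /var/tmp/spdk-nbd.sock), a two-core mask (-m 0x3) and a 4-second repeat interval (-t 4); waitforlisten then polls that socket until the app answers RPCs. A minimal sketch of this launch-and-wait step, assuming rpc_get_methods as the readiness probe and $SPDK_DIR standing in for the workspace path in the log (the real waitforlisten in common/autotest_common.sh adds timeouts and diagnostics):

rpc_sock=/var/tmp/spdk-nbd.sock
"$SPDK_DIR"/test/event/app_repeat/app_repeat -r $rpc_sock -m 0x3 -t 4 &
repeat_pid=$!
trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT

echo "Waiting for process to start up and listen on UNIX domain socket $rpc_sock..."
for ((i = 0; i < 100; i++)); do
    # ready once the in-app RPC server answers on the socket
    "$SPDK_DIR"/scripts/rpc.py -t 1 -s $rpc_sock rpc_get_methods &> /dev/null && break
    sleep 0.1
done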
00:16:32.029 [2024-10-01 22:15:27.099847] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4170010 ] 00:16:32.029 [2024-10-01 22:15:27.162796] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:32.029 [2024-10-01 22:15:27.229234] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:32.029 [2024-10-01 22:15:27.229238] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.972 22:15:27 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:32.972 22:15:27 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:16:32.972 22:15:27 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:16:32.972 Malloc0 00:16:32.972 22:15:28 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:16:33.242 Malloc1 00:16:33.242 22:15:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:16:33.242 22:15:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:33.242 22:15:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:33.242 22:15:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:33.242 22:15:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:33.242 22:15:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:33.242 22:15:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:16:33.242 22:15:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:33.242 22:15:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:33.242 22:15:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:33.242 22:15:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:33.242 22:15:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:33.242 22:15:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:16:33.242 22:15:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:33.242 22:15:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:33.242 22:15:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:16:33.242 /dev/nbd0 00:16:33.242 22:15:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:33.243 22:15:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:33.243 22:15:28 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:33.243 22:15:28 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:16:33.243 22:15:28 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:33.243 22:15:28 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:33.243 22:15:28 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 
/proc/partitions 00:16:33.503 22:15:28 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:16:33.503 22:15:28 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:33.503 22:15:28 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:33.503 22:15:28 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:16:33.503 1+0 records in 00:16:33.503 1+0 records out 00:16:33.503 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274342 s, 14.9 MB/s 00:16:33.503 22:15:28 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:16:33.503 22:15:28 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:16:33.503 22:15:28 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:16:33.503 22:15:28 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:33.503 22:15:28 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:16:33.503 22:15:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:33.503 22:15:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:33.503 22:15:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:16:33.503 /dev/nbd1 00:16:33.503 22:15:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:33.503 22:15:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:33.503 22:15:28 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:33.503 22:15:28 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:16:33.503 22:15:28 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:33.503 22:15:28 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:33.503 22:15:28 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:33.503 22:15:28 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:16:33.503 22:15:28 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:33.503 22:15:28 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:33.503 22:15:28 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:16:33.503 1+0 records in 00:16:33.503 1+0 records out 00:16:33.503 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286653 s, 14.3 MB/s 00:16:33.503 22:15:28 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:16:33.504 22:15:28 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:16:33.504 22:15:28 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:16:33.504 22:15:28 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:33.504 22:15:28 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:16:33.504 22:15:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:33.504 22:15:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:33.504 
22:15:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:33.504 22:15:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:33.504 22:15:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:33.765 22:15:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:33.765 { 00:16:33.765 "nbd_device": "/dev/nbd0", 00:16:33.765 "bdev_name": "Malloc0" 00:16:33.765 }, 00:16:33.765 { 00:16:33.765 "nbd_device": "/dev/nbd1", 00:16:33.765 "bdev_name": "Malloc1" 00:16:33.765 } 00:16:33.765 ]' 00:16:33.765 22:15:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:33.765 { 00:16:33.765 "nbd_device": "/dev/nbd0", 00:16:33.765 "bdev_name": "Malloc0" 00:16:33.765 }, 00:16:33.765 { 00:16:33.765 "nbd_device": "/dev/nbd1", 00:16:33.765 "bdev_name": "Malloc1" 00:16:33.765 } 00:16:33.765 ]' 00:16:33.765 22:15:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:33.765 22:15:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:16:33.765 /dev/nbd1' 00:16:33.765 22:15:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:16:33.765 /dev/nbd1' 00:16:33.765 22:15:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:33.765 22:15:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:16:33.765 22:15:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:16:33.765 22:15:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:16:33.765 22:15:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:16:33.765 22:15:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:16:33.765 22:15:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:33.765 22:15:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:33.765 22:15:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:33.765 22:15:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:16:33.765 22:15:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:33.765 22:15:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:16:33.765 256+0 records in 00:16:33.765 256+0 records out 00:16:33.765 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127383 s, 82.3 MB/s 00:16:33.765 22:15:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:33.765 22:15:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:33.765 256+0 records in 00:16:33.765 256+0 records out 00:16:33.765 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0163094 s, 64.3 MB/s 00:16:34.026 22:15:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:34.026 22:15:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:16:34.026 256+0 records in 00:16:34.026 256+0 records out 00:16:34.026 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0188601 s, 55.6 MB/s 00:16:34.026 22:15:29 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:16:34.026 22:15:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:34.026 22:15:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:34.026 22:15:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:34.026 22:15:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:16:34.026 22:15:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:34.026 22:15:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:34.026 22:15:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:34.026 22:15:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:16:34.026 22:15:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:34.026 22:15:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:16:34.026 22:15:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:16:34.026 22:15:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:16:34.026 22:15:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:34.026 22:15:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:34.026 22:15:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:34.026 22:15:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:16:34.026 22:15:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:34.026 22:15:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:34.026 22:15:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:34.026 22:15:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:34.026 22:15:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:34.026 22:15:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:34.026 22:15:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:34.026 22:15:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:34.026 22:15:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:16:34.026 22:15:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:16:34.026 22:15:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:34.026 22:15:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:34.287 22:15:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:34.287 22:15:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:34.287 22:15:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:34.287 22:15:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:34.287 22:15:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:16:34.287 22:15:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:34.287 22:15:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:16:34.288 22:15:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:16:34.288 22:15:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:34.288 22:15:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:34.288 22:15:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:34.548 22:15:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:34.548 22:15:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:34.548 22:15:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:34.548 22:15:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:34.548 22:15:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:16:34.548 22:15:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:34.548 22:15:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:16:34.548 22:15:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:16:34.548 22:15:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:16:34.548 22:15:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:16:34.548 22:15:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:34.548 22:15:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:16:34.548 22:15:29 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:16:34.810 22:15:29 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:16:34.810 [2024-10-01 22:15:30.032230] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:35.077 [2024-10-01 22:15:30.099520] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:35.077 [2024-10-01 22:15:30.099522] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.077 [2024-10-01 22:15:30.132324] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:16:35.077 [2024-10-01 22:15:30.132358] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:16:37.619 22:15:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:16:37.619 22:15:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:16:37.619 spdk_app_start Round 1 00:16:37.619 22:15:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4170010 /var/tmp/spdk-nbd.sock 00:16:37.619 22:15:32 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 4170010 ']' 00:16:37.619 22:15:32 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:37.619 22:15:32 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:37.619 22:15:32 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:37.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
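[editor's note] Each round builds its block devices the same way: two 64 MiB malloc bdevs with a 4 KiB block size are created over RPC, each is exported through the kernel nbd driver, and waitfornbd treats the device as usable once its name appears in /proc/partitions. A sketch of one attach, with the poll interval assumed (the helpers live in nbd_common.sh and autotest_common.sh):

rpc="$SPDK_DIR"/scripts/rpc.py
rpc_sock=/var/tmp/spdk-nbd.sock
$rpc -s $rpc_sock bdev_malloc_create 64 4096        # prints the new bdev name, e.g. Malloc0
$rpc -s $rpc_sock nbd_start_disk Malloc0 /dev/nbd0  # export it via the nbd kernel module
for ((i = 1; i <= 20; i++)); do                     # waitfornbd
    grep -q -w nbd0 /proc/partitions && break       # visible to the kernel, so ready
    sleep 0.1
done
# sanity read: one direct-I/O block off the fresh device
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct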
00:16:37.619 22:15:32 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:37.619 22:15:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:16:37.878 22:15:33 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:37.879 22:15:33 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:16:37.879 22:15:33 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:16:38.140 Malloc0 00:16:38.140 22:15:33 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:16:38.140 Malloc1 00:16:38.140 22:15:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:16:38.140 22:15:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:38.140 22:15:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:38.140 22:15:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:38.140 22:15:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:38.140 22:15:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:38.140 22:15:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:16:38.140 22:15:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:38.140 22:15:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:38.140 22:15:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:38.140 22:15:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:38.140 22:15:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:38.140 22:15:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:16:38.140 22:15:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:38.140 22:15:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:38.140 22:15:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:16:38.401 /dev/nbd0 00:16:38.401 22:15:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:38.401 22:15:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:38.401 22:15:33 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:38.401 22:15:33 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:16:38.401 22:15:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:38.401 22:15:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:38.401 22:15:33 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:38.401 22:15:33 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:16:38.401 22:15:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:38.401 22:15:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:38.401 22:15:33 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:16:38.401 1+0 records in 00:16:38.401 1+0 records out 00:16:38.401 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341288 s, 12.0 MB/s 00:16:38.401 22:15:33 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:16:38.401 22:15:33 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:16:38.401 22:15:33 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:16:38.401 22:15:33 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:38.401 22:15:33 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:16:38.401 22:15:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:38.401 22:15:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:38.401 22:15:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:16:38.662 /dev/nbd1 00:16:38.662 22:15:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:38.662 22:15:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:38.662 22:15:33 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:38.662 22:15:33 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:16:38.662 22:15:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:38.662 22:15:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:38.662 22:15:33 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:38.662 22:15:33 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:16:38.662 22:15:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:38.662 22:15:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:38.662 22:15:33 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:16:38.662 1+0 records in 00:16:38.662 1+0 records out 00:16:38.662 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275971 s, 14.8 MB/s 00:16:38.662 22:15:33 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:16:38.662 22:15:33 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:16:38.662 22:15:33 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:16:38.662 22:15:33 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:38.662 22:15:33 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:16:38.662 22:15:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:38.662 22:15:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:38.662 22:15:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:38.662 22:15:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:38.662 22:15:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:38.923 22:15:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:16:38.923 { 00:16:38.923 "nbd_device": "/dev/nbd0", 00:16:38.923 "bdev_name": "Malloc0" 00:16:38.923 }, 00:16:38.923 { 00:16:38.923 "nbd_device": "/dev/nbd1", 00:16:38.923 "bdev_name": "Malloc1" 00:16:38.923 } 00:16:38.923 ]' 00:16:38.923 22:15:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:38.923 { 00:16:38.923 "nbd_device": "/dev/nbd0", 00:16:38.923 "bdev_name": "Malloc0" 00:16:38.923 }, 00:16:38.923 { 00:16:38.923 "nbd_device": "/dev/nbd1", 00:16:38.923 "bdev_name": "Malloc1" 00:16:38.923 } 00:16:38.923 ]' 00:16:38.923 22:15:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:38.923 22:15:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:16:38.923 /dev/nbd1' 00:16:38.923 22:15:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:16:38.923 /dev/nbd1' 00:16:38.923 22:15:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:38.923 22:15:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:16:38.923 22:15:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:16:38.923 22:15:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:16:38.923 22:15:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:16:38.923 22:15:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:16:38.923 22:15:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:38.923 22:15:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:38.923 22:15:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:38.923 22:15:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:16:38.923 22:15:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:38.923 22:15:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:16:38.923 256+0 records in 00:16:38.923 256+0 records out 00:16:38.923 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0117356 s, 89.4 MB/s 00:16:38.923 22:15:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:38.923 22:15:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:38.923 256+0 records in 00:16:38.923 256+0 records out 00:16:38.923 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0172549 s, 60.8 MB/s 00:16:38.923 22:15:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:38.923 22:15:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:16:38.923 256+0 records in 00:16:38.923 256+0 records out 00:16:38.923 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0197349 s, 53.1 MB/s 00:16:38.923 22:15:34 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:16:38.923 22:15:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:38.923 22:15:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:38.923 22:15:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:38.923 22:15:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:16:38.923 22:15:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:38.923 22:15:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:38.923 22:15:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:38.923 22:15:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:16:38.923 22:15:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:38.923 22:15:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:16:38.923 22:15:34 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:16:38.923 22:15:34 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:16:38.923 22:15:34 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:38.923 22:15:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:38.923 22:15:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:38.923 22:15:34 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:16:38.923 22:15:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:38.923 22:15:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:39.185 22:15:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:39.185 22:15:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:39.185 22:15:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:39.185 22:15:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:39.185 22:15:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:39.185 22:15:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:39.185 22:15:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:16:39.185 22:15:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:16:39.185 22:15:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:39.185 22:15:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:39.445 22:15:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:39.445 22:15:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:39.445 22:15:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:39.445 22:15:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:39.445 22:15:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:39.445 22:15:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:39.445 22:15:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:16:39.445 22:15:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:16:39.445 22:15:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:39.445 22:15:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:16:39.445 22:15:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:39.445 22:15:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:39.445 22:15:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:39.445 22:15:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:39.706 22:15:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:39.706 22:15:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:16:39.706 22:15:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:39.706 22:15:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:16:39.706 22:15:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:16:39.706 22:15:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:16:39.706 22:15:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:16:39.706 22:15:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:39.706 22:15:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:16:39.706 22:15:34 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:16:39.706 22:15:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:16:39.968 [2024-10-01 22:15:35.076377] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:39.968 [2024-10-01 22:15:35.141225] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:39.968 [2024-10-01 22:15:35.141228] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.968 [2024-10-01 22:15:35.173699] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:16:39.968 [2024-10-01 22:15:35.173736] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:16:43.269 22:15:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:16:43.269 22:15:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:16:43.269 spdk_app_start Round 2 00:16:43.269 22:15:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4170010 /var/tmp/spdk-nbd.sock 00:16:43.269 22:15:37 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 4170010 ']' 00:16:43.269 22:15:37 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:43.269 22:15:37 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:43.269 22:15:37 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:43.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
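[editor's note] The data check in every round is the nbd_dd_data_verify write/verify pair seen above: fill a scratch file with 1 MiB of random data, dd it onto each nbd device with direct I/O, then cmp each device byte-for-byte against the file. A sketch with the scratch path shortened:

tmp=/tmp/nbdrandtest
dd if=/dev/urandom of=$tmp bs=4096 count=256           # 1 MiB of random data
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if=$tmp of=$nbd bs=4096 count=256 oflag=direct  # write it through each device
done
for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M $tmp $nbd                             # verify: reads back must match exactly
done
rm $tmp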
00:16:43.269 22:15:37 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:43.269 22:15:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:16:43.269 22:15:38 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:43.269 22:15:38 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:16:43.269 22:15:38 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:16:43.269 Malloc0 00:16:43.269 22:15:38 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:16:43.269 Malloc1 00:16:43.269 22:15:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:16:43.269 22:15:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:43.269 22:15:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:43.269 22:15:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:43.269 22:15:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:43.269 22:15:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:43.269 22:15:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:16:43.269 22:15:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:43.269 22:15:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:43.269 22:15:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:43.269 22:15:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:43.269 22:15:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:43.269 22:15:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:16:43.269 22:15:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:43.269 22:15:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:43.269 22:15:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:16:43.530 /dev/nbd0 00:16:43.530 22:15:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:43.530 22:15:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:43.530 22:15:38 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:43.530 22:15:38 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:16:43.530 22:15:38 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:43.530 22:15:38 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:43.530 22:15:38 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:43.530 22:15:38 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:16:43.530 22:15:38 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:43.530 22:15:38 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:43.530 22:15:38 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:16:43.530 1+0 records in 00:16:43.530 1+0 records out 00:16:43.530 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231375 s, 17.7 MB/s 00:16:43.530 22:15:38 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:16:43.530 22:15:38 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:16:43.530 22:15:38 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:16:43.530 22:15:38 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:43.530 22:15:38 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:16:43.530 22:15:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:43.530 22:15:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:43.530 22:15:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:16:43.790 /dev/nbd1 00:16:43.790 22:15:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:43.790 22:15:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:43.790 22:15:38 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:43.790 22:15:38 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:16:43.790 22:15:38 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:43.790 22:15:38 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:43.790 22:15:38 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:43.790 22:15:38 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:16:43.790 22:15:38 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:43.790 22:15:38 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:43.790 22:15:38 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:16:43.790 1+0 records in 00:16:43.790 1+0 records out 00:16:43.790 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294899 s, 13.9 MB/s 00:16:43.790 22:15:38 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:16:43.790 22:15:38 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:16:43.790 22:15:38 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:16:43.790 22:15:38 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:43.790 22:15:38 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:16:43.790 22:15:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:43.790 22:15:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:43.790 22:15:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:43.790 22:15:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:43.790 22:15:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:44.051 22:15:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:16:44.051 { 00:16:44.051 "nbd_device": "/dev/nbd0", 00:16:44.051 "bdev_name": "Malloc0" 00:16:44.051 }, 00:16:44.051 { 00:16:44.051 "nbd_device": "/dev/nbd1", 00:16:44.051 "bdev_name": "Malloc1" 00:16:44.051 } 00:16:44.051 ]' 00:16:44.051 22:15:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:44.051 { 00:16:44.051 "nbd_device": "/dev/nbd0", 00:16:44.051 "bdev_name": "Malloc0" 00:16:44.051 }, 00:16:44.051 { 00:16:44.051 "nbd_device": "/dev/nbd1", 00:16:44.051 "bdev_name": "Malloc1" 00:16:44.051 } 00:16:44.051 ]' 00:16:44.051 22:15:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:44.051 22:15:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:16:44.051 /dev/nbd1' 00:16:44.051 22:15:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:16:44.051 /dev/nbd1' 00:16:44.051 22:15:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:44.051 22:15:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:16:44.051 22:15:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:16:44.051 22:15:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:16:44.051 22:15:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:16:44.051 22:15:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:16:44.051 22:15:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:44.051 22:15:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:44.051 22:15:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:44.051 22:15:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:16:44.051 22:15:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:44.051 22:15:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:16:44.051 256+0 records in 00:16:44.051 256+0 records out 00:16:44.051 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126121 s, 83.1 MB/s 00:16:44.051 22:15:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:44.051 22:15:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:44.051 256+0 records in 00:16:44.051 256+0 records out 00:16:44.051 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0163544 s, 64.1 MB/s 00:16:44.051 22:15:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:44.051 22:15:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:16:44.051 256+0 records in 00:16:44.051 256+0 records out 00:16:44.051 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200417 s, 52.3 MB/s 00:16:44.051 22:15:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:16:44.051 22:15:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:44.051 22:15:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:44.051 22:15:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:44.051 22:15:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:16:44.051 22:15:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:44.051 22:15:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:44.051 22:15:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:44.051 22:15:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:16:44.051 22:15:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:44.051 22:15:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:16:44.051 22:15:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:16:44.051 22:15:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:16:44.051 22:15:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:44.051 22:15:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:44.051 22:15:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:44.051 22:15:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:16:44.051 22:15:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:44.051 22:15:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:44.312 22:15:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:44.312 22:15:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:44.312 22:15:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:44.312 22:15:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:44.312 22:15:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:44.312 22:15:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:44.312 22:15:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:16:44.312 22:15:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:16:44.312 22:15:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:44.312 22:15:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:44.312 22:15:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:44.312 22:15:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:44.312 22:15:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:44.312 22:15:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:44.312 22:15:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:44.312 22:15:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:44.312 22:15:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:16:44.312 22:15:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:16:44.573 22:15:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:44.573 22:15:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:16:44.573 22:15:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:44.573 22:15:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:44.573 22:15:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:44.573 22:15:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:44.573 22:15:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:44.573 22:15:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:16:44.573 22:15:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:44.573 22:15:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:16:44.573 22:15:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:16:44.573 22:15:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:16:44.573 22:15:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:16:44.573 22:15:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:44.573 22:15:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:16:44.573 22:15:39 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:16:44.833 22:15:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:16:45.093 [2024-10-01 22:15:40.162697] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:45.093 [2024-10-01 22:15:40.228245] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:45.093 [2024-10-01 22:15:40.228248] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.093 [2024-10-01 22:15:40.259793] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:16:45.093 [2024-10-01 22:15:40.259830] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:16:48.392 22:15:42 event.app_repeat -- event/event.sh@38 -- # waitforlisten 4170010 /var/tmp/spdk-nbd.sock 00:16:48.392 22:15:42 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 4170010 ']' 00:16:48.392 22:15:42 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:48.392 22:15:42 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:48.392 22:15:42 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:48.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
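[editor's note] Teardown mirrors setup: nbd_stop_disk detaches each device, waitfornbd_exit waits for the /proc/partitions entry to disappear, and nbd_get_disks must then return an empty JSON list (the grep -c /dev/nbd count of 0 above) before the app is stopped over RPC rather than by a raw signal. A sketch of that sequence, with the poll interval assumed:

for dev in /dev/nbd0 /dev/nbd1; do
    $rpc -s $rpc_sock nbd_stop_disk $dev
    name=$(basename $dev)
    for ((i = 1; i <= 20; i++)); do                   # waitfornbd_exit
        grep -q -w "$name" /proc/partitions || break  # gone from the kernel, so detached
        sleep 0.1
    done
done
# no nbd devices may be left behind; grep -c prints 0 on empty input but exits 1, hence || true
count=$($rpc -s $rpc_sock nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
[ "$count" -eq 0 ]
# finally shut the app down cleanly via RPC
$rpc -s $rpc_sock spdk_kill_instance SIGTERM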
00:16:48.392 22:15:42 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:48.392 22:15:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:16:48.392 22:15:43 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:48.392 22:15:43 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:16:48.392 22:15:43 event.app_repeat -- event/event.sh@39 -- # killprocess 4170010 00:16:48.392 22:15:43 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 4170010 ']' 00:16:48.392 22:15:43 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 4170010 00:16:48.392 22:15:43 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:16:48.392 22:15:43 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:48.392 22:15:43 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4170010 00:16:48.392 22:15:43 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:48.392 22:15:43 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:48.392 22:15:43 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4170010' 00:16:48.392 killing process with pid 4170010 00:16:48.392 22:15:43 event.app_repeat -- common/autotest_common.sh@969 -- # kill 4170010 00:16:48.392 22:15:43 event.app_repeat -- common/autotest_common.sh@974 -- # wait 4170010 00:16:48.392 spdk_app_start is called in Round 0. 00:16:48.392 Shutdown signal received, stop current app iteration 00:16:48.392 Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 reinitialization... 00:16:48.392 spdk_app_start is called in Round 1. 00:16:48.392 Shutdown signal received, stop current app iteration 00:16:48.392 Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 reinitialization... 00:16:48.392 spdk_app_start is called in Round 2. 00:16:48.392 Shutdown signal received, stop current app iteration 00:16:48.392 Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 reinitialization... 00:16:48.392 spdk_app_start is called in Round 3. 
00:16:48.392 Shutdown signal received, stop current app iteration 00:16:48.392 22:15:43 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:16:48.392 22:15:43 event.app_repeat -- event/event.sh@42 -- # return 0 00:16:48.392 00:16:48.392 real 0m16.319s 00:16:48.392 user 0m35.207s 00:16:48.392 sys 0m2.460s 00:16:48.392 22:15:43 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:48.392 22:15:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:16:48.392 ************************************ 00:16:48.392 END TEST app_repeat 00:16:48.392 ************************************ 00:16:48.392 22:15:43 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:16:48.392 22:15:43 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:16:48.392 22:15:43 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:48.392 22:15:43 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:48.392 22:15:43 event -- common/autotest_common.sh@10 -- # set +x 00:16:48.392 ************************************ 00:16:48.392 START TEST cpu_locks 00:16:48.392 ************************************ 00:16:48.392 22:15:43 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:16:48.392 * Looking for test storage... 00:16:48.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:16:48.392 22:15:43 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:48.392 22:15:43 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:16:48.392 22:15:43 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:48.392 22:15:43 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:48.392 22:15:43 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:48.392 22:15:43 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:48.392 22:15:43 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:48.392 22:15:43 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:16:48.392 22:15:43 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:16:48.392 22:15:43 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:16:48.392 22:15:43 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:16:48.393 22:15:43 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:16:48.393 22:15:43 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:16:48.393 22:15:43 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:16:48.393 22:15:43 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:48.393 22:15:43 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:16:48.393 22:15:43 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:16:48.393 22:15:43 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:48.393 22:15:43 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:48.393 22:15:43 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:16:48.393 22:15:43 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:16:48.393 22:15:43 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:48.683 22:15:43 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:16:48.683 22:15:43 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:16:48.683 22:15:43 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:16:48.683 22:15:43 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:16:48.683 22:15:43 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:48.683 22:15:43 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:16:48.683 22:15:43 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:16:48.683 22:15:43 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:48.683 22:15:43 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:48.683 22:15:43 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:16:48.683 22:15:43 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:48.683 22:15:43 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:48.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.683 --rc genhtml_branch_coverage=1 00:16:48.683 --rc genhtml_function_coverage=1 00:16:48.683 --rc genhtml_legend=1 00:16:48.683 --rc geninfo_all_blocks=1 00:16:48.683 --rc geninfo_unexecuted_blocks=1 00:16:48.683 00:16:48.683 ' 00:16:48.683 22:15:43 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:48.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.683 --rc genhtml_branch_coverage=1 00:16:48.683 --rc genhtml_function_coverage=1 00:16:48.683 --rc genhtml_legend=1 00:16:48.683 --rc geninfo_all_blocks=1 00:16:48.683 --rc geninfo_unexecuted_blocks=1 00:16:48.683 00:16:48.683 ' 00:16:48.683 22:15:43 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:48.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.683 --rc genhtml_branch_coverage=1 00:16:48.683 --rc genhtml_function_coverage=1 00:16:48.683 --rc genhtml_legend=1 00:16:48.683 --rc geninfo_all_blocks=1 00:16:48.683 --rc geninfo_unexecuted_blocks=1 00:16:48.683 00:16:48.683 ' 00:16:48.683 22:15:43 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:48.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.683 --rc genhtml_branch_coverage=1 00:16:48.683 --rc genhtml_function_coverage=1 00:16:48.683 --rc genhtml_legend=1 00:16:48.683 --rc geninfo_all_blocks=1 00:16:48.683 --rc geninfo_unexecuted_blocks=1 00:16:48.683 00:16:48.683 ' 00:16:48.683 22:15:43 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:16:48.683 22:15:43 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:16:48.683 22:15:43 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:16:48.683 22:15:43 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:16:48.683 22:15:43 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:48.683 22:15:43 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:48.683 22:15:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:16:48.683 ************************************ 
00:16:48.683 START TEST default_locks 00:16:48.683 ************************************ 00:16:48.683 22:15:43 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:16:48.683 22:15:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:16:48.683 22:15:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=4173508 00:16:48.683 22:15:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 4173508 00:16:48.683 22:15:43 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 4173508 ']' 00:16:48.683 22:15:43 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.683 22:15:43 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:48.683 22:15:43 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.683 22:15:43 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:48.683 22:15:43 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:16:48.683 [2024-10-01 22:15:43.733534] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:16:48.683 [2024-10-01 22:15:43.733584] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4173508 ] 00:16:48.683 [2024-10-01 22:15:43.789299] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.683 [2024-10-01 22:15:43.856166] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.944 22:15:44 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:48.944 22:15:44 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:16:48.944 22:15:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 4173508 00:16:48.944 22:15:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 4173508 00:16:48.944 22:15:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:16:49.205 lslocks: write error 00:16:49.205 22:15:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 4173508 00:16:49.205 22:15:44 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 4173508 ']' 00:16:49.205 22:15:44 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 4173508 00:16:49.206 22:15:44 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:16:49.206 22:15:44 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:49.206 22:15:44 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4173508 00:16:49.206 22:15:44 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:49.206 22:15:44 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:49.206 22:15:44 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with 
pid 4173508' 00:16:49.206 killing process with pid 4173508 00:16:49.206 22:15:44 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 4173508 00:16:49.206 22:15:44 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 4173508 00:16:49.466 22:15:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 4173508 00:16:49.466 22:15:44 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:16:49.466 22:15:44 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 4173508 00:16:49.466 22:15:44 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:16:49.466 22:15:44 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:49.466 22:15:44 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:16:49.466 22:15:44 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:49.466 22:15:44 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 4173508 00:16:49.466 22:15:44 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 4173508 ']' 00:16:49.466 22:15:44 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.466 22:15:44 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:49.466 22:15:44 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:49.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:49.467 22:15:44 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:49.467 22:15:44 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:16:49.467 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (4173508) - No such process 00:16:49.467 ERROR: process (pid: 4173508) is no longer running 00:16:49.467 22:15:44 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:49.467 22:15:44 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:16:49.467 22:15:44 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:16:49.467 22:15:44 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:49.467 22:15:44 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:49.467 22:15:44 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:49.467 22:15:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:16:49.467 22:15:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:16:49.467 22:15:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:16:49.467 22:15:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:16:49.467 00:16:49.467 real 0m0.917s 00:16:49.467 user 0m0.863s 00:16:49.467 sys 0m0.415s 00:16:49.467 22:15:44 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:49.467 22:15:44 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:16:49.467 ************************************ 00:16:49.467 END TEST default_locks 00:16:49.467 ************************************ 00:16:49.467 22:15:44 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:16:49.467 22:15:44 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:49.467 22:15:44 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:49.467 22:15:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:16:49.467 ************************************ 00:16:49.467 START TEST default_locks_via_rpc 00:16:49.467 ************************************ 00:16:49.467 22:15:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:16:49.467 22:15:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=4173752 00:16:49.467 22:15:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 4173752 00:16:49.467 22:15:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:16:49.467 22:15:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 4173752 ']' 00:16:49.467 22:15:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.467 22:15:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:49.467 22:15:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:49.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
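The default_locks test that just finished boils down to: start one target pinned to core 0, confirm the reactor process holds the per-core lock file, kill it, then assert that a second waitforlisten on the dead pid fails. The recurring "lslocks: write error" above is harmless: grep -q exits on its first match and closes the pipe, so lslocks reports EPIPE. A condensed sketch of the positive check, under those assumptions:

    locks_exist() {
        local pid=$1
        # grep -q closes the pipe on the first hit, which is what produces the
        # "lslocks: write error" lines in this log.
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    ./build/bin/spdk_tgt -m 0x1 &      # pin to core 0; takes /var/tmp/spdk_cpu_lock_000
    tgt_pid=$!
    # (after waiting for it to listen)
    locks_exist "$tgt_pid" || echo "expected core-0 lock is missing" >&2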
00:16:49.467 22:15:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:49.467 22:15:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.727 [2024-10-01 22:15:44.753152] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:16:49.727 [2024-10-01 22:15:44.753206] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4173752 ] 00:16:49.727 [2024-10-01 22:15:44.815467] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.727 [2024-10-01 22:15:44.886309] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.298 22:15:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:50.298 22:15:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:16:50.298 22:15:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:16:50.298 22:15:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.298 22:15:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:50.298 22:15:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.298 22:15:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:16:50.298 22:15:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:16:50.298 22:15:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:16:50.298 22:15:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:16:50.298 22:15:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:16:50.298 22:15:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.298 22:15:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:50.298 22:15:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.298 22:15:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 4173752 00:16:50.298 22:15:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 4173752 00:16:50.298 22:15:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:16:50.559 22:15:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 4173752 00:16:50.559 22:15:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 4173752 ']' 00:16:50.559 22:15:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 4173752 00:16:50.559 22:15:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:16:50.559 22:15:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:50.559 22:15:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4173752 00:16:50.559 22:15:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:50.559 
22:15:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:50.559 22:15:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4173752' 00:16:50.559 killing process with pid 4173752 00:16:50.559 22:15:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 4173752 00:16:50.559 22:15:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 4173752 00:16:50.819 00:16:50.819 real 0m1.340s 00:16:50.819 user 0m1.386s 00:16:50.819 sys 0m0.458s 00:16:50.819 22:15:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:50.819 22:15:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:50.819 ************************************ 00:16:50.819 END TEST default_locks_via_rpc 00:16:50.819 ************************************ 00:16:50.819 22:15:46 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:16:50.819 22:15:46 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:50.819 22:15:46 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:50.819 22:15:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:16:51.079 ************************************ 00:16:51.079 START TEST non_locking_app_on_locked_coremask 00:16:51.079 ************************************ 00:16:51.080 22:15:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:16:51.080 22:15:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=4174120 00:16:51.080 22:15:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 4174120 /var/tmp/spdk.sock 00:16:51.080 22:15:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:16:51.080 22:15:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 4174120 ']' 00:16:51.080 22:15:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.080 22:15:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:51.080 22:15:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.080 22:15:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:51.080 22:15:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:51.080 [2024-10-01 22:15:46.156974] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
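default_locks_via_rpc, which completed above, exercises the same per-core lock through RPC calls against a running target instead of through process start-up. A sketch of the sequence the trace shows, assuming the default /var/tmp/spdk.sock socket and that releasing the locks also removes the lock files (which is what the empty lock_files=() glob in the trace indicates):

    shopt -s nullglob
    ./scripts/rpc.py framework_disable_cpumask_locks     # drop the per-core locks at runtime
    lock_files=(/var/tmp/spdk_cpu_lock_*)
    (( ${#lock_files[@]} == 0 )) && echo "all core locks released"
    ./scripts/rpc.py framework_enable_cpumask_locks      # re-take them
    lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock && echo "core lock re-acquired"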
00:16:51.080 [2024-10-01 22:15:46.157028] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4174120 ] 00:16:51.080 [2024-10-01 22:15:46.220785] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.080 [2024-10-01 22:15:46.292492] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.023 22:15:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:52.023 22:15:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:16:52.023 22:15:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:16:52.023 22:15:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=4174156 00:16:52.023 22:15:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 4174156 /var/tmp/spdk2.sock 00:16:52.023 22:15:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 4174156 ']' 00:16:52.023 22:15:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:16:52.023 22:15:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:52.023 22:15:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:16:52.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:16:52.023 22:15:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:52.023 22:15:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:52.023 [2024-10-01 22:15:46.954447] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:16:52.023 [2024-10-01 22:15:46.954488] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4174156 ] 00:16:52.023 [2024-10-01 22:15:47.034374] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
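The two EAL start-ups above are the heart of non_locking_app_on_locked_coremask: the first target claims core 0's lock, and a second target on the same core mask can still come up only because it opts out with --disable-cpumask-locks (hence the "CPU core locks deactivated." notice on the second instance). In outline:

    ./build/bin/spdk_tgt -m 0x1 &                        # holds core 0's lock file
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks \
        -r /var/tmp/spdk2.sock &                         # same core, takes no lock
    # Both start; only the first shows spdk_cpu_lock in lslocks output.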
00:16:52.023 [2024-10-01 22:15:47.034399] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.023 [2024-10-01 22:15:47.167676] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.595 22:15:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:52.595 22:15:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:16:52.595 22:15:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 4174120 00:16:52.595 22:15:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:16:52.595 22:15:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4174120 00:16:53.537 lslocks: write error 00:16:53.537 22:15:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 4174120 00:16:53.537 22:15:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 4174120 ']' 00:16:53.537 22:15:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 4174120 00:16:53.537 22:15:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:16:53.537 22:15:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:53.537 22:15:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4174120 00:16:53.799 22:15:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:53.799 22:15:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:53.799 22:15:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4174120' 00:16:53.799 killing process with pid 4174120 00:16:53.799 22:15:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 4174120 00:16:53.799 22:15:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 4174120 00:16:54.369 22:15:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 4174156 00:16:54.369 22:15:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 4174156 ']' 00:16:54.369 22:15:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 4174156 00:16:54.369 22:15:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:16:54.369 22:15:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:54.369 22:15:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4174156 00:16:54.369 22:15:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:54.369 22:15:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:54.369 22:15:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4174156' 00:16:54.369 
killing process with pid 4174156 00:16:54.369 22:15:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 4174156 00:16:54.369 22:15:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 4174156 00:16:54.631 00:16:54.631 real 0m3.622s 00:16:54.631 user 0m3.821s 00:16:54.631 sys 0m1.239s 00:16:54.631 22:15:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:54.631 22:15:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:54.631 ************************************ 00:16:54.631 END TEST non_locking_app_on_locked_coremask 00:16:54.631 ************************************ 00:16:54.631 22:15:49 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:16:54.631 22:15:49 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:54.631 22:15:49 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:54.631 22:15:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:16:54.631 ************************************ 00:16:54.631 START TEST locking_app_on_unlocked_coremask 00:16:54.631 ************************************ 00:16:54.631 22:15:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:16:54.631 22:15:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=4174831 00:16:54.631 22:15:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 4174831 /var/tmp/spdk.sock 00:16:54.631 22:15:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:16:54.631 22:15:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 4174831 ']' 00:16:54.631 22:15:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.631 22:15:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:54.631 22:15:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:54.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:54.631 22:15:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:54.631 22:15:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:54.631 [2024-10-01 22:15:49.848805] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:16:54.631 [2024-10-01 22:15:49.848854] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4174831 ] 00:16:54.901 [2024-10-01 22:15:49.910111] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
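Every test in this suite shuts down through the same killprocess helper, whose checks are visible repeatedly in the trace above: confirm the pid is alive, read its comm name (reactor_0 in these runs), special-case sudo wrappers, then SIGTERM and reap. A condensed sketch; the sudo branch is elided here, so treat its handling as an assumption:

    killprocess() {
        local pid=$1 process_name=
        kill -0 "$pid" || return 1                            # must still be alive
        [ "$(uname)" = Linux ] &&
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 in these runs
        [ "$process_name" = sudo ] && return 1                # sudo branch elided in this sketch
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                           # reap; works because it is our child
    }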
00:16:54.901 [2024-10-01 22:15:49.910139] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.901 [2024-10-01 22:15:49.975805] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.472 22:15:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:55.472 22:15:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:16:55.472 22:15:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=4175036 00:16:55.472 22:15:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 4175036 /var/tmp/spdk2.sock 00:16:55.472 22:15:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 4175036 ']' 00:16:55.472 22:15:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:16:55.472 22:15:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:16:55.472 22:15:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:55.472 22:15:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:16:55.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:16:55.472 22:15:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:55.472 22:15:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:55.472 [2024-10-01 22:15:50.687666] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:16:55.472 [2024-10-01 22:15:50.687722] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4175036 ] 00:16:55.733 [2024-10-01 22:15:50.776415] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.733 [2024-10-01 22:15:50.909873] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.304 22:15:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:56.304 22:15:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:16:56.304 22:15:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 4175036 00:16:56.304 22:15:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4175036 00:16:56.304 22:15:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:16:56.565 lslocks: write error 00:16:56.565 22:15:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 4174831 00:16:56.565 22:15:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 4174831 ']' 00:16:56.565 22:15:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 4174831 00:16:56.565 22:15:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:16:56.565 22:15:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:56.565 22:15:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4174831 00:16:56.826 22:15:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:56.826 22:15:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:56.826 22:15:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4174831' 00:16:56.826 killing process with pid 4174831 00:16:56.826 22:15:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 4174831 00:16:56.826 22:15:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 4174831 00:16:57.398 22:15:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 4175036 00:16:57.398 22:15:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 4175036 ']' 00:16:57.398 22:15:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 4175036 00:16:57.398 22:15:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:16:57.398 22:15:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:57.398 22:15:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4175036 00:16:57.398 22:15:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:57.398 22:15:52 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:57.398 22:15:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4175036' 00:16:57.398 killing process with pid 4175036 00:16:57.398 22:15:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 4175036 00:16:57.398 22:15:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 4175036 00:16:57.658 00:16:57.658 real 0m2.941s 00:16:57.658 user 0m3.111s 00:16:57.658 sys 0m0.928s 00:16:57.658 22:15:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:57.658 22:15:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:57.658 ************************************ 00:16:57.658 END TEST locking_app_on_unlocked_coremask 00:16:57.658 ************************************ 00:16:57.658 22:15:52 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:16:57.658 22:15:52 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:57.658 22:15:52 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:57.658 22:15:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:16:57.658 ************************************ 00:16:57.658 START TEST locking_app_on_locked_coremask 00:16:57.658 ************************************ 00:16:57.658 22:15:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:16:57.658 22:15:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=4175538 00:16:57.658 22:15:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 4175538 /var/tmp/spdk.sock 00:16:57.658 22:15:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:16:57.658 22:15:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 4175538 ']' 00:16:57.659 22:15:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.659 22:15:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:57.659 22:15:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.659 22:15:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:57.659 22:15:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:57.659 [2024-10-01 22:15:52.867070] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
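The repeated "Waiting for process to start up and listen on UNIX domain socket ..." lines come from waitforlisten, which polls the target's RPC socket until it answers. A hedged approximation of that helper: only max_retries=100 is visible in the trace; the rpc_get_methods probe and the 0.5 s back-off are assumptions about autotest_common.sh, not confirmed by this log:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1       # target died before listening
            # assumed probe: any cheap RPC proves the socket is answering
            ./scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null && return 0
            sleep 0.5                                     # assumed back-off
        done
        return 1
    }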
00:16:57.659 [2024-10-01 22:15:52.867121] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4175538 ] 00:16:57.919 [2024-10-01 22:15:52.927948] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.919 [2024-10-01 22:15:52.990844] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.500 22:15:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:58.500 22:15:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:16:58.500 22:15:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=4175579 00:16:58.500 22:15:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 4175579 /var/tmp/spdk2.sock 00:16:58.500 22:15:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:16:58.500 22:15:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:16:58.500 22:15:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 4175579 /var/tmp/spdk2.sock 00:16:58.500 22:15:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:16:58.500 22:15:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:58.500 22:15:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:16:58.500 22:15:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:58.500 22:15:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 4175579 /var/tmp/spdk2.sock 00:16:58.500 22:15:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 4175579 ']' 00:16:58.500 22:15:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:16:58.500 22:15:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:58.500 22:15:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:16:58.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:16:58.501 22:15:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:58.501 22:15:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:58.501 [2024-10-01 22:15:53.688421] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
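The negative case above is wrapped in NOT, which the suite uses to assert that a command fails without tripping set -e. A simplified sketch consistent with the es= bookkeeping in the trace; the real helper also distinguishes individual signals, which is elided here (treating any exit code above 128, i.e. a signal death, as a non-orderly failure is inferred from the (( es > 128 )) check):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es == 0 ))  && return 1    # command unexpectedly succeeded
        (( es > 128 )) && return 1    # killed by a signal: not an orderly failure
        return 0                      # failed with a plain error code, as asserted
    }
    # e.g. NOT waitforlisten "$dead_pid" /var/tmp/spdk2.sock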
00:16:58.501 [2024-10-01 22:15:53.688474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4175579 ] 00:16:58.780 [2024-10-01 22:15:53.778324] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 4175538 has claimed it. 00:16:58.780 [2024-10-01 22:15:53.778364] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:16:59.375 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (4175579) - No such process 00:16:59.375 ERROR: process (pid: 4175579) is no longer running 00:16:59.375 22:15:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:59.375 22:15:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:16:59.375 22:15:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:16:59.375 22:15:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:59.375 22:15:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:59.375 22:15:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:59.375 22:15:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 4175538 00:16:59.375 22:15:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4175538 00:16:59.375 22:15:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:16:59.635 lslocks: write error 00:16:59.635 22:15:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 4175538 00:16:59.635 22:15:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 4175538 ']' 00:16:59.635 22:15:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 4175538 00:16:59.635 22:15:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:16:59.635 22:15:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:59.635 22:15:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4175538 00:16:59.635 22:15:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:59.635 22:15:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:59.635 22:15:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4175538' 00:16:59.635 killing process with pid 4175538 00:16:59.635 22:15:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 4175538 00:16:59.635 22:15:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 4175538 00:16:59.896 00:16:59.896 real 0m2.325s 00:16:59.896 user 0m2.559s 00:16:59.896 sys 0m0.679s 00:16:59.896 22:15:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:16:59.896 22:15:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:59.896 ************************************ 00:16:59.896 END TEST locking_app_on_locked_coremask 00:16:59.896 ************************************ 00:17:00.157 22:15:55 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:17:00.157 22:15:55 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:00.157 22:15:55 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:00.157 22:15:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:17:00.157 ************************************ 00:17:00.157 START TEST locking_overlapped_coremask 00:17:00.157 ************************************ 00:17:00.157 22:15:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:17:00.157 22:15:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=4175928 00:17:00.157 22:15:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 4175928 /var/tmp/spdk.sock 00:17:00.157 22:15:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:17:00.157 22:15:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 4175928 ']' 00:17:00.157 22:15:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.157 22:15:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:00.157 22:15:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:00.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:00.157 22:15:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:00.157 22:15:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:17:00.157 [2024-10-01 22:15:55.276685] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:17:00.157 [2024-10-01 22:15:55.276737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4175928 ] 00:17:00.157 [2024-10-01 22:15:55.341411] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:00.417 [2024-10-01 22:15:55.414607] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:00.418 [2024-10-01 22:15:55.414745] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:00.418 [2024-10-01 22:15:55.414839] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.988 22:15:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:00.988 22:15:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:17:00.988 22:15:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=4176260 00:17:00.988 22:15:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 4176260 /var/tmp/spdk2.sock 00:17:00.988 22:15:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:17:00.988 22:15:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:17:00.988 22:15:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 4176260 /var/tmp/spdk2.sock 00:17:00.988 22:15:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:17:00.988 22:15:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:00.988 22:15:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:17:00.988 22:15:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:00.988 22:15:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 4176260 /var/tmp/spdk2.sock 00:17:00.988 22:15:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 4176260 ']' 00:17:00.988 22:15:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:17:00.988 22:15:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:00.988 22:15:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:17:00.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:17:00.989 22:15:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:00.989 22:15:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:17:00.989 [2024-10-01 22:15:56.125961] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
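locking_overlapped_coremask runs the primary on -m 0x7 and the challenger on -m 0x1c; the two masks share exactly one core, which is why the trace that follows rejects the challenger on core 2. A quick way to see the overlap (mask_cores is a hypothetical helper for illustration, not part of the suite):

    mask_cores() {                    # print the core ids set in a hex cpumask
        local mask=$(( $1 )) core=0
        while (( mask )); do
            (( mask & 1 )) && printf '%d ' "$core"
            (( core += 1, mask >>= 1 ))
        done
        echo
    }
    mask_cores 0x07                   # -> 0 1 2   (primary)
    mask_cores 0x1c                   # -> 2 3 4   (challenger; core 2 collides)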
00:17:00.989 [2024-10-01 22:15:56.126013] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4176260 ] 00:17:00.989 [2024-10-01 22:15:56.199323] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 4175928 has claimed it. 00:17:00.989 [2024-10-01 22:15:56.199351] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:17:01.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (4176260) - No such process 00:17:01.559 ERROR: process (pid: 4176260) is no longer running 00:17:01.559 22:15:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:01.559 22:15:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:17:01.559 22:15:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:17:01.559 22:15:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:01.559 22:15:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:01.559 22:15:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:01.559 22:15:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:17:01.559 22:15:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:17:01.559 22:15:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:17:01.559 22:15:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:17:01.559 22:15:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 4175928 00:17:01.559 22:15:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 4175928 ']' 00:17:01.559 22:15:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 4175928 00:17:01.559 22:15:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:17:01.559 22:15:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:01.559 22:15:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4175928 00:17:01.820 22:15:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:01.820 22:15:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:01.820 22:15:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4175928' 00:17:01.820 killing process with pid 4175928 00:17:01.820 22:15:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 4175928 00:17:01.820 22:15:56 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 4175928 00:17:02.081 00:17:02.081 real 0m1.888s 00:17:02.081 user 0m5.317s 00:17:02.081 sys 0m0.419s 00:17:02.081 22:15:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:02.081 22:15:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:17:02.081 ************************************ 00:17:02.081 END TEST locking_overlapped_coremask 00:17:02.081 ************************************ 00:17:02.081 22:15:57 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:17:02.081 22:15:57 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:02.081 22:15:57 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:02.081 22:15:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:17:02.081 ************************************ 00:17:02.081 START TEST locking_overlapped_coremask_via_rpc 00:17:02.081 ************************************ 00:17:02.081 22:15:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:17:02.081 22:15:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=4176411 00:17:02.081 22:15:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 4176411 /var/tmp/spdk.sock 00:17:02.081 22:15:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:17:02.081 22:15:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 4176411 ']' 00:17:02.081 22:15:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.081 22:15:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:02.081 22:15:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.081 22:15:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:02.081 22:15:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:02.081 [2024-10-01 22:15:57.235571] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:17:02.081 [2024-10-01 22:15:57.235621] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4176411 ] 00:17:02.081 [2024-10-01 22:15:57.296125] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
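The via_rpc variant differs from the previous test in one flag: both targets start with --disable-cpumask-locks, so the overlapping masks (0x7 and 0x1c again) do not collide at startup and the second target's reactors come up normally on cores 2-4. The per-core locks are only taken later, when the test enables them over JSON-RPC. A sketch of that sequence, assuming the stock rpc.py client (not harness output):

    scripts/rpc.py framework_enable_cpumask_locks                          # first target: claims cores 0-2
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # second target: fails, core 2 taken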
00:17:02.081 [2024-10-01 22:15:57.296152] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:02.343 [2024-10-01 22:15:57.362247] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:02.343 [2024-10-01 22:15:57.362364] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:02.343 [2024-10-01 22:15:57.362366] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.915 22:15:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:02.915 22:15:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:17:02.915 22:15:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=4176632 00:17:02.915 22:15:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:17:02.915 22:15:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 4176632 /var/tmp/spdk2.sock 00:17:02.915 22:15:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 4176632 ']' 00:17:02.915 22:15:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:17:02.915 22:15:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:02.915 22:15:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:17:02.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:17:02.915 22:15:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:02.915 22:15:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:02.915 [2024-10-01 22:15:58.076225] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:17:02.915 [2024-10-01 22:15:58.076280] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4176632 ] 00:17:02.915 [2024-10-01 22:15:58.151859] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:17:02.915 [2024-10-01 22:15:58.151883] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:03.175 [2024-10-01 22:15:58.258556] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:03.175 [2024-10-01 22:15:58.261744] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:03.175 [2024-10-01 22:15:58.261746] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:17:03.747 22:15:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:03.747 22:15:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:17:03.747 22:15:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:17:03.747 22:15:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.747 22:15:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.747 22:15:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.747 22:15:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:17:03.747 22:15:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:17:03.747 22:15:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:17:03.748 22:15:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:03.748 22:15:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:03.748 22:15:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:03.748 22:15:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:03.748 22:15:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:17:03.748 22:15:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.748 22:15:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.748 [2024-10-01 22:15:58.886687] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 4176411 has claimed it. 
00:17:03.748 request: 00:17:03.748 { 00:17:03.748 "method": "framework_enable_cpumask_locks", 00:17:03.748 "req_id": 1 00:17:03.748 } 00:17:03.748 Got JSON-RPC error response 00:17:03.748 response: 00:17:03.748 { 00:17:03.748 "code": -32603, 00:17:03.748 "message": "Failed to claim CPU core: 2" 00:17:03.748 } 00:17:03.748 22:15:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:03.748 22:15:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:17:03.748 22:15:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:03.748 22:15:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:03.748 22:15:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:03.748 22:15:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 4176411 /var/tmp/spdk.sock 00:17:03.748 22:15:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 4176411 ']' 00:17:03.748 22:15:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.748 22:15:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:03.748 22:15:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:03.748 22:15:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:03.748 22:15:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.008 22:15:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:04.008 22:15:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:17:04.008 22:15:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 4176632 /var/tmp/spdk2.sock 00:17:04.008 22:15:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 4176632 ']' 00:17:04.008 22:15:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:17:04.008 22:15:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:04.008 22:15:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:17:04.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
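Unlike the first test, where the second spdk_tgt exited outright ("Unable to acquire lock on assigned core mask - exiting."), the RPC path keeps the process alive and reports the collision as a JSON-RPC error (-32603, "Failed to claim CPU core: 2"). check_remaining_locks then confirms that only the first target's lock files survive; its logic, paraphrased from the cpu_locks.sh lines traced in the surrounding output (not harness output):

    locks=(/var/tmp/spdk_cpu_lock_*)                    # lock files actually present
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0-2, held by the first target
    [[ ${locks[*]} == "${locks_expected[*]}" ]]         # must match exactly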
00:17:04.008 22:15:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:04.008 22:15:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.008 22:15:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:04.008 22:15:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:17:04.008 22:15:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:17:04.008 22:15:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:17:04.008 22:15:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:17:04.008 22:15:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:17:04.008 00:17:04.008 real 0m2.084s 00:17:04.008 user 0m0.852s 00:17:04.008 sys 0m0.157s 00:17:04.008 22:15:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:04.008 22:15:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.008 ************************************ 00:17:04.008 END TEST locking_overlapped_coremask_via_rpc 00:17:04.008 ************************************ 00:17:04.269 22:15:59 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:17:04.269 22:15:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 4176411 ]] 00:17:04.269 22:15:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 4176411 00:17:04.269 22:15:59 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 4176411 ']' 00:17:04.269 22:15:59 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 4176411 00:17:04.269 22:15:59 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:17:04.269 22:15:59 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:04.269 22:15:59 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4176411 00:17:04.269 22:15:59 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:04.269 22:15:59 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:04.269 22:15:59 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4176411' 00:17:04.269 killing process with pid 4176411 00:17:04.269 22:15:59 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 4176411 00:17:04.269 22:15:59 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 4176411 00:17:04.530 22:15:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 4176632 ]] 00:17:04.530 22:15:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 4176632 00:17:04.530 22:15:59 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 4176632 ']' 00:17:04.530 22:15:59 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 4176632 00:17:04.530 22:15:59 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:17:04.530 22:15:59 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:17:04.530 22:15:59 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4176632 00:17:04.530 22:15:59 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:04.530 22:15:59 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:04.530 22:15:59 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4176632' 00:17:04.530 killing process with pid 4176632 00:17:04.530 22:15:59 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 4176632 00:17:04.530 22:15:59 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 4176632 00:17:04.791 22:15:59 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:17:04.791 22:15:59 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:17:04.791 22:15:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 4176411 ]] 00:17:04.791 22:15:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 4176411 00:17:04.791 22:15:59 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 4176411 ']' 00:17:04.791 22:15:59 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 4176411 00:17:04.791 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (4176411) - No such process 00:17:04.791 22:15:59 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 4176411 is not found' 00:17:04.791 Process with pid 4176411 is not found 00:17:04.791 22:15:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 4176632 ]] 00:17:04.791 22:15:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 4176632 00:17:04.791 22:15:59 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 4176632 ']' 00:17:04.791 22:15:59 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 4176632 00:17:04.791 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (4176632) - No such process 00:17:04.791 22:15:59 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 4176632 is not found' 00:17:04.791 Process with pid 4176632 is not found 00:17:04.791 22:15:59 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:17:04.791 00:17:04.791 real 0m16.533s 00:17:04.791 user 0m28.459s 00:17:04.791 sys 0m5.331s 00:17:04.791 22:15:59 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:04.791 22:15:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:17:04.791 ************************************ 00:17:04.791 END TEST cpu_locks 00:17:04.791 ************************************ 00:17:04.791 00:17:04.791 real 0m43.098s 00:17:04.791 user 1m23.511s 00:17:04.791 sys 0m8.889s 00:17:04.791 22:16:00 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:04.791 22:16:00 event -- common/autotest_common.sh@10 -- # set +x 00:17:04.791 ************************************ 00:17:04.791 END TEST event 00:17:04.791 ************************************ 00:17:05.052 22:16:00 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:17:05.052 22:16:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:05.052 22:16:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:05.052 22:16:00 -- common/autotest_common.sh@10 -- # set +x 00:17:05.052 ************************************ 00:17:05.052 START TEST thread 00:17:05.052 ************************************ 00:17:05.052 22:16:00 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:17:05.052 * Looking for test storage... 00:17:05.052 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:17:05.052 22:16:00 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:05.052 22:16:00 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:17:05.052 22:16:00 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:05.052 22:16:00 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:05.052 22:16:00 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:05.052 22:16:00 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:05.052 22:16:00 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:05.052 22:16:00 thread -- scripts/common.sh@336 -- # IFS=.-: 00:17:05.052 22:16:00 thread -- scripts/common.sh@336 -- # read -ra ver1 00:17:05.052 22:16:00 thread -- scripts/common.sh@337 -- # IFS=.-: 00:17:05.052 22:16:00 thread -- scripts/common.sh@337 -- # read -ra ver2 00:17:05.052 22:16:00 thread -- scripts/common.sh@338 -- # local 'op=<' 00:17:05.052 22:16:00 thread -- scripts/common.sh@340 -- # ver1_l=2 00:17:05.052 22:16:00 thread -- scripts/common.sh@341 -- # ver2_l=1 00:17:05.052 22:16:00 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:05.052 22:16:00 thread -- scripts/common.sh@344 -- # case "$op" in 00:17:05.052 22:16:00 thread -- scripts/common.sh@345 -- # : 1 00:17:05.052 22:16:00 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:05.052 22:16:00 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:05.052 22:16:00 thread -- scripts/common.sh@365 -- # decimal 1 00:17:05.052 22:16:00 thread -- scripts/common.sh@353 -- # local d=1 00:17:05.052 22:16:00 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:05.052 22:16:00 thread -- scripts/common.sh@355 -- # echo 1 00:17:05.052 22:16:00 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:17:05.052 22:16:00 thread -- scripts/common.sh@366 -- # decimal 2 00:17:05.052 22:16:00 thread -- scripts/common.sh@353 -- # local d=2 00:17:05.052 22:16:00 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:05.313 22:16:00 thread -- scripts/common.sh@355 -- # echo 2 00:17:05.313 22:16:00 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:17:05.313 22:16:00 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:05.313 22:16:00 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:05.313 22:16:00 thread -- scripts/common.sh@368 -- # return 0 00:17:05.313 22:16:00 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:05.313 22:16:00 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:05.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.313 --rc genhtml_branch_coverage=1 00:17:05.313 --rc genhtml_function_coverage=1 00:17:05.313 --rc genhtml_legend=1 00:17:05.313 --rc geninfo_all_blocks=1 00:17:05.313 --rc geninfo_unexecuted_blocks=1 00:17:05.313 00:17:05.313 ' 00:17:05.313 22:16:00 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:05.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.313 --rc genhtml_branch_coverage=1 00:17:05.313 --rc genhtml_function_coverage=1 00:17:05.313 --rc genhtml_legend=1 00:17:05.313 --rc geninfo_all_blocks=1 00:17:05.313 --rc geninfo_unexecuted_blocks=1 00:17:05.313 
00:17:05.313 ' 00:17:05.313 22:16:00 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:05.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.313 --rc genhtml_branch_coverage=1 00:17:05.313 --rc genhtml_function_coverage=1 00:17:05.313 --rc genhtml_legend=1 00:17:05.313 --rc geninfo_all_blocks=1 00:17:05.313 --rc geninfo_unexecuted_blocks=1 00:17:05.313 00:17:05.313 ' 00:17:05.313 22:16:00 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:05.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.313 --rc genhtml_branch_coverage=1 00:17:05.313 --rc genhtml_function_coverage=1 00:17:05.313 --rc genhtml_legend=1 00:17:05.313 --rc geninfo_all_blocks=1 00:17:05.313 --rc geninfo_unexecuted_blocks=1 00:17:05.313 00:17:05.313 ' 00:17:05.313 22:16:00 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:17:05.313 22:16:00 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:17:05.313 22:16:00 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:05.313 22:16:00 thread -- common/autotest_common.sh@10 -- # set +x 00:17:05.313 ************************************ 00:17:05.313 START TEST thread_poller_perf 00:17:05.313 ************************************ 00:17:05.313 22:16:00 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:17:05.313 [2024-10-01 22:16:00.373735] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:17:05.313 [2024-10-01 22:16:00.373851] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4177109 ] 00:17:05.313 [2024-10-01 22:16:00.438000] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.313 [2024-10-01 22:16:00.503657] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.313 Running 1000 pollers for 1 seconds with 1 microseconds period. 
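The banner above restates the poller_perf flags from the command line: -b is the number of pollers to register, -l the poller period in microseconds, -t the run time in seconds. Spelled out for this first run (an annotation, with the binary path shortened):

    test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
    # -b 1000: register 1000 pollers
    # -l 1:    1 microsecond period (the later run uses -l 0, an untimed poller)
    # -t 1:    measure for 1 second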
00:17:06.695 ====================================== 00:17:06.695 busy:2411067312 (cyc) 00:17:06.695 total_run_count: 288000 00:17:06.695 tsc_hz: 2400000000 (cyc) 00:17:06.695 ====================================== 00:17:06.695 poller_cost: 8371 (cyc), 3487 (nsec) 00:17:06.695 00:17:06.695 real 0m1.214s 00:17:06.695 user 0m1.140s 00:17:06.695 sys 0m0.069s 00:17:06.695 22:16:01 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:06.695 22:16:01 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:17:06.695 ************************************ 00:17:06.695 END TEST thread_poller_perf 00:17:06.695 ************************************ 00:17:06.695 22:16:01 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:17:06.695 22:16:01 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:17:06.695 22:16:01 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:06.695 22:16:01 thread -- common/autotest_common.sh@10 -- # set +x 00:17:06.695 ************************************ 00:17:06.695 START TEST thread_poller_perf 00:17:06.695 ************************************ 00:17:06.695 22:16:01 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:17:06.696 [2024-10-01 22:16:01.665509] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:17:06.696 [2024-10-01 22:16:01.665611] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4177436 ] 00:17:06.696 [2024-10-01 22:16:01.731489] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.696 [2024-10-01 22:16:01.794000] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.696 Running 1000 pollers for 1 seconds with 0 microseconds period. 
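The summary figures above are related by simple arithmetic: poller_cost in cycles is busy / total_run_count, and the nanosecond value converts cycles at tsc_hz = 2.4 GHz. Checked with shell integer math (not harness output):

    echo $(( 2411067312 / 288000 ))   # -> 8371 cycles per poller invocation
    echo $(( 8371 * 10 / 24 ))        # -> 3487 nsec at 2.4 GHz

The 0-microsecond run whose results follow works out the same way: 2402366266 / 3801000 = 632 cycles (263 nsec), roughly 13x cheaper per invocation than the timed pollers of the first run.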
00:17:07.643 ====================================== 00:17:07.643 busy:2402366266 (cyc) 00:17:07.643 total_run_count: 3801000 00:17:07.643 tsc_hz: 2400000000 (cyc) 00:17:07.643 ====================================== 00:17:07.643 poller_cost: 632 (cyc), 263 (nsec) 00:17:07.643 00:17:07.643 real 0m1.206s 00:17:07.643 user 0m1.127s 00:17:07.643 sys 0m0.075s 00:17:07.643 22:16:02 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:07.643 22:16:02 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:17:07.643 ************************************ 00:17:07.643 END TEST thread_poller_perf 00:17:07.643 ************************************ 00:17:07.643 22:16:02 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:17:07.643 00:17:07.643 real 0m2.778s 00:17:07.643 user 0m2.440s 00:17:07.643 sys 0m0.349s 00:17:07.643 22:16:02 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:07.643 22:16:02 thread -- common/autotest_common.sh@10 -- # set +x 00:17:07.643 ************************************ 00:17:07.643 END TEST thread 00:17:07.643 ************************************ 00:17:07.906 22:16:02 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:17:07.906 22:16:02 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:17:07.906 22:16:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:07.906 22:16:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:07.906 22:16:02 -- common/autotest_common.sh@10 -- # set +x 00:17:07.906 ************************************ 00:17:07.906 START TEST app_cmdline 00:17:07.906 ************************************ 00:17:07.906 22:16:02 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:17:07.906 * Looking for test storage... 00:17:07.906 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:17:07.906 22:16:03 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:07.906 22:16:03 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:17:07.906 22:16:03 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:07.906 22:16:03 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:07.906 22:16:03 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:07.906 22:16:03 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:07.906 22:16:03 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:07.906 22:16:03 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:17:07.906 22:16:03 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:17:07.906 22:16:03 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:17:07.906 22:16:03 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:17:07.906 22:16:03 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:17:07.906 22:16:03 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:17:07.906 22:16:03 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:17:07.906 22:16:03 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:07.906 22:16:03 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:17:07.906 22:16:03 app_cmdline -- scripts/common.sh@345 -- # : 1 00:17:07.906 22:16:03 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:07.906 22:16:03 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:07.906 22:16:03 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:17:07.906 22:16:03 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:17:07.906 22:16:03 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:07.906 22:16:03 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:17:07.906 22:16:03 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:17:07.906 22:16:03 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:17:07.906 22:16:03 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:17:07.906 22:16:03 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:07.906 22:16:03 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:17:07.906 22:16:03 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:17:07.906 22:16:03 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:07.906 22:16:03 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:07.906 22:16:03 app_cmdline -- scripts/common.sh@368 -- # return 0 00:17:07.906 22:16:03 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:07.906 22:16:03 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:07.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.906 --rc genhtml_branch_coverage=1 00:17:07.906 --rc genhtml_function_coverage=1 00:17:07.906 --rc genhtml_legend=1 00:17:07.906 --rc geninfo_all_blocks=1 00:17:07.906 --rc geninfo_unexecuted_blocks=1 00:17:07.906 00:17:07.906 ' 00:17:07.906 22:16:03 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:07.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.906 --rc genhtml_branch_coverage=1 00:17:07.906 --rc genhtml_function_coverage=1 00:17:07.906 --rc genhtml_legend=1 00:17:07.906 --rc geninfo_all_blocks=1 00:17:07.906 --rc geninfo_unexecuted_blocks=1 00:17:07.906 00:17:07.906 ' 00:17:07.906 22:16:03 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:07.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.906 --rc genhtml_branch_coverage=1 00:17:07.906 --rc genhtml_function_coverage=1 00:17:07.906 --rc genhtml_legend=1 00:17:07.906 --rc geninfo_all_blocks=1 00:17:07.906 --rc geninfo_unexecuted_blocks=1 00:17:07.906 00:17:07.906 ' 00:17:07.906 22:16:03 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:07.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.906 --rc genhtml_branch_coverage=1 00:17:07.906 --rc genhtml_function_coverage=1 00:17:07.906 --rc genhtml_legend=1 00:17:07.906 --rc geninfo_all_blocks=1 00:17:07.906 --rc geninfo_unexecuted_blocks=1 00:17:07.906 00:17:07.906 ' 00:17:07.906 22:16:03 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:17:07.906 22:16:03 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=4177836 00:17:07.906 22:16:03 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 4177836 00:17:07.906 22:16:03 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 4177836 ']' 00:17:07.906 22:16:03 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:07.906 22:16:03 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:07.906 22:16:03 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
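The cmdline test starting here runs spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are reachable over /var/tmp/spdk.sock. The output below exercises both sides of the allowlist; the equivalent manual calls would be (a sketch, not harness output):

    scripts/rpc.py spdk_get_version          # allowed: returns the version JSON shown below
    scripts/rpc.py rpc_get_methods           # allowed: lists exactly the two permitted methods
    scripts/rpc.py env_dpdk_get_mem_stats    # blocked: JSON-RPC error -32601 "Method not found"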
00:17:07.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:07.906 22:16:03 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:07.906 22:16:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:17:07.906 22:16:03 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:17:08.167 [2024-10-01 22:16:03.206588] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:17:08.167 [2024-10-01 22:16:03.206658] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4177836 ] 00:17:08.167 [2024-10-01 22:16:03.271540] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.167 [2024-10-01 22:16:03.345617] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.739 22:16:03 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:08.739 22:16:03 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:17:08.739 22:16:03 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:17:08.999 { 00:17:08.999 "version": "SPDK v25.01-pre git sha1 1b1c3081e", 00:17:09.000 "fields": { 00:17:09.000 "major": 25, 00:17:09.000 "minor": 1, 00:17:09.000 "patch": 0, 00:17:09.000 "suffix": "-pre", 00:17:09.000 "commit": "1b1c3081e" 00:17:09.000 } 00:17:09.000 } 00:17:09.000 22:16:04 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:17:09.000 22:16:04 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:17:09.000 22:16:04 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:17:09.000 22:16:04 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:17:09.000 22:16:04 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:17:09.000 22:16:04 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:17:09.000 22:16:04 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.000 22:16:04 app_cmdline -- app/cmdline.sh@26 -- # sort 00:17:09.000 22:16:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:17:09.000 22:16:04 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.000 22:16:04 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:17:09.000 22:16:04 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:17:09.000 22:16:04 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:17:09.000 22:16:04 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:17:09.000 22:16:04 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:17:09.000 22:16:04 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:09.000 22:16:04 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:09.000 22:16:04 app_cmdline -- common/autotest_common.sh@642 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:09.000 22:16:04 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:09.000 22:16:04 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:09.000 22:16:04 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:09.000 22:16:04 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:09.000 22:16:04 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:09.000 22:16:04 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:17:09.261 request: 00:17:09.261 { 00:17:09.261 "method": "env_dpdk_get_mem_stats", 00:17:09.261 "req_id": 1 00:17:09.261 } 00:17:09.261 Got JSON-RPC error response 00:17:09.261 response: 00:17:09.261 { 00:17:09.261 "code": -32601, 00:17:09.261 "message": "Method not found" 00:17:09.261 } 00:17:09.261 22:16:04 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:17:09.261 22:16:04 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:09.261 22:16:04 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:09.261 22:16:04 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:09.261 22:16:04 app_cmdline -- app/cmdline.sh@1 -- # killprocess 4177836 00:17:09.261 22:16:04 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 4177836 ']' 00:17:09.261 22:16:04 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 4177836 00:17:09.261 22:16:04 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:17:09.261 22:16:04 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:09.261 22:16:04 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4177836 00:17:09.261 22:16:04 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:09.261 22:16:04 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:09.261 22:16:04 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4177836' 00:17:09.261 killing process with pid 4177836 00:17:09.261 22:16:04 app_cmdline -- common/autotest_common.sh@969 -- # kill 4177836 00:17:09.261 22:16:04 app_cmdline -- common/autotest_common.sh@974 -- # wait 4177836 00:17:09.521 00:17:09.521 real 0m1.731s 00:17:09.521 user 0m2.001s 00:17:09.521 sys 0m0.470s 00:17:09.521 22:16:04 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:09.521 22:16:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:17:09.521 ************************************ 00:17:09.521 END TEST app_cmdline 00:17:09.521 ************************************ 00:17:09.521 22:16:04 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:17:09.521 22:16:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:09.521 22:16:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:09.521 22:16:04 -- common/autotest_common.sh@10 -- # set +x 00:17:09.521 ************************************ 00:17:09.521 START TEST version 00:17:09.521 ************************************ 00:17:09.521 22:16:04 version -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:17:09.782 * Looking for test storage... 00:17:09.782 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:17:09.782 22:16:04 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:09.782 22:16:04 version -- common/autotest_common.sh@1681 -- # lcov --version 00:17:09.782 22:16:04 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:09.782 22:16:04 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:09.782 22:16:04 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:09.782 22:16:04 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:09.782 22:16:04 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:09.782 22:16:04 version -- scripts/common.sh@336 -- # IFS=.-: 00:17:09.782 22:16:04 version -- scripts/common.sh@336 -- # read -ra ver1 00:17:09.782 22:16:04 version -- scripts/common.sh@337 -- # IFS=.-: 00:17:09.782 22:16:04 version -- scripts/common.sh@337 -- # read -ra ver2 00:17:09.782 22:16:04 version -- scripts/common.sh@338 -- # local 'op=<' 00:17:09.782 22:16:04 version -- scripts/common.sh@340 -- # ver1_l=2 00:17:09.782 22:16:04 version -- scripts/common.sh@341 -- # ver2_l=1 00:17:09.782 22:16:04 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:09.782 22:16:04 version -- scripts/common.sh@344 -- # case "$op" in 00:17:09.782 22:16:04 version -- scripts/common.sh@345 -- # : 1 00:17:09.783 22:16:04 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:09.783 22:16:04 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:09.783 22:16:04 version -- scripts/common.sh@365 -- # decimal 1 00:17:09.783 22:16:04 version -- scripts/common.sh@353 -- # local d=1 00:17:09.783 22:16:04 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:09.783 22:16:04 version -- scripts/common.sh@355 -- # echo 1 00:17:09.783 22:16:04 version -- scripts/common.sh@365 -- # ver1[v]=1 00:17:09.783 22:16:04 version -- scripts/common.sh@366 -- # decimal 2 00:17:09.783 22:16:04 version -- scripts/common.sh@353 -- # local d=2 00:17:09.783 22:16:04 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:09.783 22:16:04 version -- scripts/common.sh@355 -- # echo 2 00:17:09.783 22:16:04 version -- scripts/common.sh@366 -- # ver2[v]=2 00:17:09.783 22:16:04 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:09.783 22:16:04 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:09.783 22:16:04 version -- scripts/common.sh@368 -- # return 0 00:17:09.783 22:16:04 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:09.783 22:16:04 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:09.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.783 --rc genhtml_branch_coverage=1 00:17:09.783 --rc genhtml_function_coverage=1 00:17:09.783 --rc genhtml_legend=1 00:17:09.783 --rc geninfo_all_blocks=1 00:17:09.783 --rc geninfo_unexecuted_blocks=1 00:17:09.783 00:17:09.783 ' 00:17:09.783 22:16:04 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:09.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.783 --rc genhtml_branch_coverage=1 00:17:09.783 --rc genhtml_function_coverage=1 00:17:09.783 --rc genhtml_legend=1 00:17:09.783 --rc geninfo_all_blocks=1 00:17:09.783 --rc geninfo_unexecuted_blocks=1 00:17:09.783 00:17:09.783 ' 
00:17:09.783 22:16:04 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:09.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.783 --rc genhtml_branch_coverage=1 00:17:09.783 --rc genhtml_function_coverage=1 00:17:09.783 --rc genhtml_legend=1 00:17:09.783 --rc geninfo_all_blocks=1 00:17:09.783 --rc geninfo_unexecuted_blocks=1 00:17:09.783 00:17:09.783 ' 00:17:09.783 22:16:04 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:09.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.783 --rc genhtml_branch_coverage=1 00:17:09.783 --rc genhtml_function_coverage=1 00:17:09.783 --rc genhtml_legend=1 00:17:09.783 --rc geninfo_all_blocks=1 00:17:09.783 --rc geninfo_unexecuted_blocks=1 00:17:09.783 00:17:09.783 ' 00:17:09.783 22:16:04 version -- app/version.sh@17 -- # get_header_version major 00:17:09.783 22:16:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:17:09.783 22:16:04 version -- app/version.sh@14 -- # cut -f2 00:17:09.783 22:16:04 version -- app/version.sh@14 -- # tr -d '"' 00:17:09.783 22:16:04 version -- app/version.sh@17 -- # major=25 00:17:09.783 22:16:04 version -- app/version.sh@18 -- # get_header_version minor 00:17:09.783 22:16:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:17:09.783 22:16:04 version -- app/version.sh@14 -- # cut -f2 00:17:09.783 22:16:04 version -- app/version.sh@14 -- # tr -d '"' 00:17:09.783 22:16:04 version -- app/version.sh@18 -- # minor=1 00:17:09.783 22:16:04 version -- app/version.sh@19 -- # get_header_version patch 00:17:09.783 22:16:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:17:09.783 22:16:04 version -- app/version.sh@14 -- # cut -f2 00:17:09.783 22:16:04 version -- app/version.sh@14 -- # tr -d '"' 00:17:09.783 22:16:04 version -- app/version.sh@19 -- # patch=0 00:17:09.783 22:16:04 version -- app/version.sh@20 -- # get_header_version suffix 00:17:09.783 22:16:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:17:09.783 22:16:04 version -- app/version.sh@14 -- # cut -f2 00:17:09.783 22:16:04 version -- app/version.sh@14 -- # tr -d '"' 00:17:09.783 22:16:04 version -- app/version.sh@20 -- # suffix=-pre 00:17:09.783 22:16:04 version -- app/version.sh@22 -- # version=25.1 00:17:09.783 22:16:04 version -- app/version.sh@25 -- # (( patch != 0 )) 00:17:09.783 22:16:04 version -- app/version.sh@28 -- # version=25.1rc0 00:17:09.783 22:16:04 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:17:09.783 22:16:04 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:17:10.044 22:16:05 version -- app/version.sh@30 -- # py_version=25.1rc0 00:17:10.044 22:16:05 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:17:10.044 00:17:10.044 real 0m0.283s 00:17:10.044 user 0m0.176s 00:17:10.044 sys 
0m0.156s 00:17:10.044 22:16:05 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:10.044 22:16:05 version -- common/autotest_common.sh@10 -- # set +x 00:17:10.044 ************************************ 00:17:10.044 END TEST version 00:17:10.044 ************************************ 00:17:10.044 22:16:05 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:17:10.044 22:16:05 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:17:10.044 22:16:05 -- spdk/autotest.sh@194 -- # uname -s 00:17:10.044 22:16:05 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:17:10.044 22:16:05 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:17:10.044 22:16:05 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:17:10.044 22:16:05 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:17:10.044 22:16:05 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:17:10.044 22:16:05 -- spdk/autotest.sh@256 -- # timing_exit lib 00:17:10.044 22:16:05 -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:10.044 22:16:05 -- common/autotest_common.sh@10 -- # set +x 00:17:10.044 22:16:05 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:17:10.044 22:16:05 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:17:10.044 22:16:05 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:17:10.044 22:16:05 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:17:10.044 22:16:05 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:17:10.044 22:16:05 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:17:10.044 22:16:05 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:17:10.044 22:16:05 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:10.044 22:16:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:10.044 22:16:05 -- common/autotest_common.sh@10 -- # set +x 00:17:10.044 ************************************ 00:17:10.044 START TEST nvmf_tcp 00:17:10.044 ************************************ 00:17:10.044 22:16:05 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:17:10.044 * Looking for test storage... 
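The version test that finished above assembles its expected string from include/spdk/version.h: major=25, minor=1, patch=0, suffix=-pre. Paraphrasing the version.sh logic traced in the log (a sketch; the -pre -> rc0 mapping is inferred from the trace, not quoted from the script):

    major=25; minor=1; patch=0; suffix=-pre
    version=$major.$minor
    (( patch != 0 )) && version=$version.$patch   # skipped here, patch is 0
    [[ $suffix == -pre ]] && version=${version}rc0
    echo "$version"   # -> 25.1rc0, matching python3 -c 'import spdk; print(spdk.__version__)'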
00:17:10.044 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:17:10.044 22:16:05 nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:10.044 22:16:05 nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:17:10.044 22:16:05 nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:10.305 22:16:05 nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:10.305 22:16:05 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:10.305 22:16:05 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:10.305 22:16:05 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:10.305 22:16:05 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:17:10.305 22:16:05 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:17:10.305 22:16:05 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:17:10.305 22:16:05 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:17:10.305 22:16:05 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:17:10.305 22:16:05 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:17:10.305 22:16:05 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:17:10.305 22:16:05 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:10.305 22:16:05 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:17:10.305 22:16:05 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:17:10.305 22:16:05 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:10.305 22:16:05 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:10.305 22:16:05 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:17:10.305 22:16:05 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:17:10.305 22:16:05 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:10.305 22:16:05 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:17:10.305 22:16:05 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:17:10.305 22:16:05 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:17:10.305 22:16:05 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:17:10.305 22:16:05 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:10.305 22:16:05 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:17:10.305 22:16:05 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:17:10.305 22:16:05 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:10.305 22:16:05 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:10.305 22:16:05 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:17:10.305 22:16:05 nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:10.305 22:16:05 nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:10.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.305 --rc genhtml_branch_coverage=1 00:17:10.305 --rc genhtml_function_coverage=1 00:17:10.305 --rc genhtml_legend=1 00:17:10.305 --rc geninfo_all_blocks=1 00:17:10.305 --rc geninfo_unexecuted_blocks=1 00:17:10.305 00:17:10.305 ' 00:17:10.305 22:16:05 nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:10.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.305 --rc genhtml_branch_coverage=1 00:17:10.305 --rc genhtml_function_coverage=1 00:17:10.305 --rc genhtml_legend=1 00:17:10.305 --rc geninfo_all_blocks=1 00:17:10.305 --rc geninfo_unexecuted_blocks=1 00:17:10.305 00:17:10.305 ' 00:17:10.305 22:16:05 nvmf_tcp -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:17:10.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.305 --rc genhtml_branch_coverage=1 00:17:10.305 --rc genhtml_function_coverage=1 00:17:10.305 --rc genhtml_legend=1 00:17:10.305 --rc geninfo_all_blocks=1 00:17:10.305 --rc geninfo_unexecuted_blocks=1 00:17:10.305 00:17:10.305 '
00:17:10.305 22:16:05 nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
00:17:10.305 22:16:05 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s
00:17:10.305 22:16:05 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']'
00:17:10.305 22:16:05 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp
00:17:10.305 22:16:05 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:17:10.305 22:16:05 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable
00:17:10.305 22:16:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:17:10.305 ************************************
00:17:10.305 START TEST nvmf_target_core
00:17:10.305 ************************************
00:17:10.305 22:16:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp
00:17:10.305 * Looking for test storage...
00:17:10.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
[... lcov 1.15-vs-2 version check and LCOV_OPTS/LCOV exports repeated verbatim from the nvmf_tcp block above ...]
00:17:10.567 22:16:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s
00:17:10.567 22:16:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!'
Linux = Linux ']' 00:17:10.567 22:16:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:10.567 22:16:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:17:10.567 22:16:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:10.567 22:16:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:10.567 22:16:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:10.567 22:16:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:10.567 22:16:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:10.567 22:16:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:10.567 22:16:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:10.567 22:16:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:10.567 22:16:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:10.567 22:16:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:10.567 22:16:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:17:10.567 22:16:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:17:10.567 22:16:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:10.567 22:16:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:10.567 22:16:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:10.567 22:16:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:10.567 22:16:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:10.567 22:16:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:17:10.567 22:16:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:10.567 22:16:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:10.567 22:16:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:10.567 22:16:05 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.567 22:16:05 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.567 22:16:05 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.567 22:16:05 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:17:10.567 22:16:05 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.567 22:16:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:17:10.567 22:16:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:10.567 22:16:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:10.567 22:16:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:10.567 22:16:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:10.567 22:16:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:10.567 22:16:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:10.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:10.567 22:16:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:10.567 22:16:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:10.567 22:16:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:10.567 22:16:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:17:10.567 22:16:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:17:10.567 22:16:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:17:10.567 22:16:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:17:10.567 22:16:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:10.567 22:16:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:10.567 22:16:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:17:10.567 
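The "[: : integer expression expected" message in the trace above is a real (if benign) bug surfaced by the harness: nvmf/common.sh line 33 runs an arithmetic test, '[' '' -eq 1 ']', against a variable that is empty in this environment, and the [ builtin cannot parse '' as an integer. A minimal reproduction and guard, as a sketch (the variable name "flag" is illustrative, not the actual name used by common.sh):

    # Reproduction: [ parses both operands of -eq as integers, so an unset or
    # empty variable makes the test abort with "integer expression expected".
    flag=""
    [ "$flag" -eq 1 ] && echo enabled      # -> [: : integer expression expected

    # Guard: give the expansion a default so [ always sees an integer.
    [ "${flag:-0}" -eq 1 ] && echo enabled # quietly false instead of erroring

The run itself is unaffected, because the failing [ merely returns nonzero and the script falls through to the next branch, but the stray error repeats every time a test sources common.sh in this log.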
************************************
00:17:10.567 START TEST nvmf_abort
00:17:10.567 ************************************
00:17:10.567 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp
00:17:10.567 * Looking for test storage...
00:17:10.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
[... lcov 1.15-vs-2 version check and LCOV_OPTS/LCOV exports repeated verbatim from the nvmf_tcp block above ...]
00:17:10.829 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:17:10.829 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s
00:17:10.829 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux
== FreeBSD ]]
[... common.sh defaults (NVMF_PORT=4420 etc.), paths/export.sh PATH exports, and build_nvmf_app_args repeated verbatim from the nvmf_target_core block above, including the same "common.sh: line 33: [: : integer expression expected" error ...]
00:17:10.830 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64
00:17:10.830 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096
00:17:10.830 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit
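A side effect worth noticing in the PATH exports collapsed above: every time a test sources common.sh, paths/export.sh unconditionally prepends the same /opt/go, /opt/golangci and /opt/protoc directories again, so the PATH echoed by paths/export.sh@6 visibly grows over the course of the run. A hedged sketch of an idempotent alternative (path_prepend is a name invented here, not an SPDK helper):

    # Prepend a directory to PATH only if it is not already present,
    # so sourcing this file repeatedly leaves PATH unchanged.
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;              # already on PATH: nothing to do
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/golangci/1.54.2/bin
    path_prepend /opt/protoc/21.7/bin
    export PATH

Nothing breaks as-is, since duplicate PATH entries are harmless to command lookup, but the accumulation is why these export lines get longer with every test.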
00:17:10.830 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:10.830 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:10.830 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:10.830 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:10.830 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:10.830 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:10.830 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:10.830 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.830 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:17:10.830 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:17:10.830 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:17:10.830 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:17:18.968 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:18.968 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:17:18.968 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:18.968 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:18.968 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:18.968 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:18.968 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:18.968 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:17:18.968 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:18.968 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:18.969 22:16:12 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:18.969 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:18.969 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:18.969 22:16:12 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:18.969 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:18.969 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:18.969 22:16:12 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:18.969 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:18.969 22:16:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:18.969 22:16:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:18.969 22:16:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:18.969 22:16:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:18.969 22:16:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:18.969 22:16:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:18.969 22:16:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:18.969 22:16:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:18.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:18.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.675 ms 00:17:18.969 00:17:18.969 --- 10.0.0.2 ping statistics --- 00:17:18.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.969 rtt min/avg/max/mdev = 0.675/0.675/0.675/0.000 ms 00:17:18.969 22:16:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:18.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:18.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:17:18.969 00:17:18.969 --- 10.0.0.1 ping statistics --- 00:17:18.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.969 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:17:18.969 22:16:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:18.969 22:16:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:17:18.969 22:16:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:18.969 22:16:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:18.969 22:16:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:18.969 22:16:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:18.969 22:16:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:18.969 22:16:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:18.969 22:16:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:18.969 22:16:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:17:18.969 22:16:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:18.969 22:16:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:18.969 22:16:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:17:18.969 22:16:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # nvmfpid=4182320 00:17:18.969 22:16:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 4182320 00:17:18.970 22:16:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:18.970 22:16:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 4182320 ']' 00:17:18.970 22:16:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.970 22:16:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:18.970 22:16:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:18.970 22:16:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:18.970 22:16:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:17:18.970 [2024-10-01 22:16:13.368330] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:17:18.970 [2024-10-01 22:16:13.368402] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:18.970 [2024-10-01 22:16:13.458859] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:18.970 [2024-10-01 22:16:13.555665] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:18.970 [2024-10-01 22:16:13.555726] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:18.970 [2024-10-01 22:16:13.555735] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:18.970 [2024-10-01 22:16:13.555742] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:18.970 [2024-10-01 22:16:13.555749] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:18.970 [2024-10-01 22:16:13.555902] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:18.970 [2024-10-01 22:16:13.556168] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:18.970 [2024-10-01 22:16:13.556170] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:18.970 22:16:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:18.970 22:16:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:17:18.970 22:16:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:18.970 22:16:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:18.970 22:16:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:17:19.231 22:16:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:19.231 22:16:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:17:19.231 22:16:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.231 22:16:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:17:19.231 [2024-10-01 22:16:14.229040] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:19.231 22:16:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.231 22:16:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:17:19.231 22:16:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.231 22:16:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:17:19.231 Malloc0 00:17:19.231 22:16:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.231 22:16:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:17:19.231 22:16:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.231 22:16:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:17:19.231 Delay0 
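rpc_cmd in this trace is the harness wrapper around SPDK's scripts/rpc.py, talking to the nvmf_tgt started above. Reproduced standalone, the target setup abort.sh performs here and in the next few entries would look roughly like this sketch (assuming the target listens on the default /var/tmp/spdk.sock RPC socket; all arguments are taken from the trace):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256   # TCP transport, options as traced
    $rpc bdev_malloc_create 64 4096 -b Malloc0            # 64 MiB RAM bdev, 4096-byte blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000      # injected latency (values in us, ~1 s)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The Delay0 bdev is the point of the exercise: with roughly a second of artificial latency on every I/O, the abort example that runs next has a large window of in-flight commands to abort, which is why it can report tens of thousands of successful aborts below.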
00:17:19.231 22:16:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.231 22:16:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:19.231 22:16:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.231 22:16:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:17:19.231 22:16:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.231 22:16:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:17:19.231 22:16:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.231 22:16:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:17:19.231 22:16:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.231 22:16:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:19.231 22:16:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.231 22:16:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:17:19.231 [2024-10-01 22:16:14.296609] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:19.231 22:16:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.231 22:16:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:19.231 22:16:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.231 22:16:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:17:19.231 22:16:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.231 22:16:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:17:19.231 [2024-10-01 22:16:14.449876] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:17:21.776 Initializing NVMe Controllers 00:17:21.776 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:17:21.776 controller IO queue size 128 less than required 00:17:21.776 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:17:21.776 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:17:21.776 Initialization complete. Launching workers. 
00:17:21.776 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 30333 00:17:21.776 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 30394, failed to submit 62 00:17:21.776 success 30333, unsuccessful 61, failed 0 00:17:21.776 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:21.776 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.776 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:17:21.776 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.776 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:17:21.776 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:17:21.776 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:21.776 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:17:21.776 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:21.776 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:17:21.776 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:21.776 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:21.776 rmmod nvme_tcp 00:17:21.776 rmmod nvme_fabrics 00:17:21.776 rmmod nvme_keyring 00:17:21.776 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:21.776 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:17:21.776 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:17:21.776 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 4182320 ']' 00:17:21.776 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 4182320 00:17:21.776 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 4182320 ']' 00:17:21.776 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 4182320 00:17:21.776 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:17:21.776 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:21.776 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4182320 00:17:21.776 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:21.776 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:21.776 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4182320' 00:17:21.776 killing process with pid 4182320 00:17:21.776 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 4182320 00:17:21.776 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 4182320 00:17:21.776 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:21.776 22:16:16 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:17:21.776 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:21.776 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:17:21.776 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:17:21.776 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:17:21.776 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:21.776 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:21.776 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:21.776 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:21.776 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:21.776 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:23.741 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:23.741 00:17:23.741 real 0m13.292s 00:17:23.741 user 0m14.081s 00:17:23.741 sys 0m6.332s 00:17:23.741 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:23.741 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:17:23.741 ************************************ 00:17:23.741 END TEST nvmf_abort 00:17:23.741 ************************************ 00:17:24.001 22:16:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:17:24.001 22:16:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:24.001 22:16:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:24.001 22:16:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:17:24.001 ************************************ 00:17:24.001 START TEST nvmf_ns_hotplug_stress 00:17:24.001 ************************************ 00:17:24.001 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:17:24.001 * Looking for test storage... 
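One detail of the teardown just traced: the harness never flushes the firewall wholesale. At setup, ipts inserted the port-4420 ACCEPT rule tagged with an 'SPDK_NVMF:' comment (see the iptables line before the ping checks above), and iptr removes exactly the tagged rules by filtering the saved ruleset. The pattern in isolation, copied from the traced commands:

    # Setup: insert the rule tagged with a recognizable comment.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # Teardown: drop every SPDK_NVMF-tagged rule in one pass, leaving the
    # rest of the host's ruleset untouched.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

This is why the test can come and go on a shared CI host without disturbing whatever other rules the machine carries.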
00:17:24.001 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
[... lcov 1.15-vs-2 version check and LCOV_OPTS/LCOV exports repeated verbatim from the nvmf_tcp block above ...]
00:17:24.001 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:24.001 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:17:24.001 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:24.001 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:24.001 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:24.001 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:24.001 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:24.001 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:24.001 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:24.001 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:24.001 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:24.001 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:24.001 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:17:24.001 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:17:24.001 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:24.001 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:24.001 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:24.002 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:24.002 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:24.002 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:24.262 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:24.262 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:24.262 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:24.262 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.262 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.262 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.262 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:17:24.262 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.262 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:17:24.262 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:24.262 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:24.262 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:24.262 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:24.263 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:24.263 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:24.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:24.263 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:24.263 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:24.263 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:24.263 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:24.263 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:17:24.263 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:24.263 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:24.263 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:24.263 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:24.263 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:24.263 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:24.263 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:24.263 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.263 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:17:24.263 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:17:24.263 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:17:24.263 22:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:17:32.400 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:32.401 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:32.401 
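The @313-@377 trace entries above classify each NIC by its PCI vendor:device pair (Intel E810 ports such as 0x1592/0x159b, X722 0x37d2, a list of Mellanox IDs) and then resolve the kernel netdev bound to each matching function. A minimal standalone sketch of the same idea — assuming a direct sysfs walk rather than the script's actual pci_bus_cache lookup, with the ID list abridged to the E810 entries seen here:

intel=0x8086
e810_ids="0x1592 0x159b"                       # abridged; the trace also matches X722 and Mellanox IDs
for bdf in /sys/bus/pci/devices/*; do
  vendor=$(cat "$bdf/vendor")                  # e.g. 0x8086
  device=$(cat "$bdf/device")                  # e.g. 0x159b
  [[ $vendor == "$intel" ]] || continue
  case " $e810_ids " in
    *" $device "*)
      for net in "$bdf"/net/*; do              # kernel netdev(s) bound to this PCI function
        [ -e "$net" ] || continue
        echo "Found ${bdf##*/} ($vendor - $device): ${net##*/}"
      done ;;
  esac
done

On this node that resolves to the two E810 ports reported in the trace, cvl_0_0 under 0000:4b:00.0 and cvl_0_1 under 0000:4b:00.1.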
22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:32.401 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:32.401 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:32.401 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:32.401 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:32.401 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.682 ms 00:17:32.401 00:17:32.401 --- 10.0.0.2 ping statistics --- 00:17:32.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.401 rtt min/avg/max/mdev = 0.682/0.682/0.682/0.000 ms 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:32.401 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:32.401 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:17:32.401 00:17:32.401 --- 10.0.0.1 ping statistics --- 00:17:32.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.401 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=4187226 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 4187226 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 
4187226 ']' 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:32.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:32.401 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:17:32.401 [2024-10-01 22:16:26.689250] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:17:32.402 [2024-10-01 22:16:26.689316] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:32.402 [2024-10-01 22:16:26.777699] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:32.402 [2024-10-01 22:16:26.873618] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:32.402 [2024-10-01 22:16:26.873692] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:32.402 [2024-10-01 22:16:26.873701] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:32.402 [2024-10-01 22:16:26.873708] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:32.402 [2024-10-01 22:16:26.873715] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
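Condensed, the nvmftestinit sequence traced in the @250-@291 entries above builds a two-namespace TCP topology on those two ports (commands as they appear in the trace, address flushes omitted): one E810 port moves into a private namespace as the target at 10.0.0.2, its sibling stays in the default namespace as the initiator at 10.0.0.1, port 4420 is opened, and connectivity is verified in both directions before the target app starts.

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

This is why nvmf_tgt below is launched via "ip netns exec cvl_0_0_ns_spdk": the target listens on 10.0.0.2 inside the namespace while the kernel initiator connects from the default namespace.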
00:17:32.402 [2024-10-01 22:16:26.873899] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:32.402 [2024-10-01 22:16:26.874038] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:32.402 [2024-10-01 22:16:26.874040] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:32.402 22:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:32.402 22:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:17:32.402 22:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:32.402 22:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:32.402 22:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:17:32.402 22:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:32.402 22:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:17:32.402 22:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:32.662 [2024-10-01 22:16:27.690273] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:32.662 22:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:32.662 22:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:32.923 [2024-10-01 22:16:28.051835] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:32.923 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:33.182 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:17:33.182 Malloc0 00:17:33.441 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:17:33.441 Delay0 00:17:33.441 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:33.701 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:17:33.965 NULL1 00:17:33.965 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:33.965 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:17:33.965 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=4187745 00:17:33.965 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:17:33.965 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:34.230 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:34.490 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:17:34.490 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:17:34.490 true 00:17:34.490 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:17:34.490 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:34.751 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:35.011 22:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:17:35.011 22:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:17:35.011 true 00:17:35.272 22:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:17:35.272 22:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:35.272 22:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:35.532 22:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:17:35.532 22:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:17:35.791 true 00:17:35.791 22:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:17:35.791 22:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:35.791 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:36.051 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:17:36.051 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:17:36.312 true 00:17:36.312 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:17:36.312 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:36.572 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:36.572 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:17:36.572 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:17:36.832 true 00:17:36.832 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:17:36.832 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:37.092 22:16:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:37.092 22:16:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:17:37.092 22:16:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:17:37.352 true 00:17:37.352 22:16:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:17:37.352 22:16:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:37.612 22:16:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:37.612 22:16:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:17:37.612 22:16:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:17:37.871 true 00:17:37.871 22:16:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:17:37.871 22:16:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:38.131 22:16:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:38.391 22:16:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:17:38.391 22:16:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:17:38.391 true 00:17:38.391 22:16:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:17:38.391 22:16:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:38.651 22:16:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:38.911 22:16:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:17:38.911 22:16:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:17:38.911 true 00:17:38.911 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:17:38.911 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:39.172 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:39.433 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:17:39.433 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:17:39.433 true 00:17:39.694 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:17:39.694 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:39.694 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:39.954 22:16:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:17:39.954 22:16:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:17:40.215 true 00:17:40.215 22:16:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:17:40.215 22:16:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:40.215 22:16:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:40.475 22:16:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:17:40.475 22:16:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:17:40.735 true 00:17:40.735 22:16:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:17:40.735 22:16:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:40.735 22:16:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:40.995 22:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:17:40.995 22:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:17:41.257 true 00:17:41.257 22:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:17:41.257 22:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:41.518 22:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:41.518 22:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:17:41.518 22:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:17:41.778 true 00:17:41.778 22:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:17:41.778 22:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:42.038 22:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:17:42.038 22:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:17:42.038 22:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:17:42.323 true 00:17:42.323 22:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:17:42.323 22:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:42.583 22:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:42.583 22:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:17:42.583 22:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:17:42.843 true 00:17:42.843 22:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:17:42.843 22:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:43.104 22:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:43.104 22:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:17:43.104 22:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:17:43.365 true 00:17:43.365 22:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:17:43.365 22:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:43.626 22:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:43.886 22:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:17:43.886 22:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:17:43.886 true 00:17:43.886 22:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:17:43.886 22:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:44.147 22:16:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:44.408 22:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:17:44.408 22:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:17:44.408 true 00:17:44.408 22:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:17:44.408 22:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:44.669 22:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:44.930 22:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:17:44.930 22:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:17:44.930 true 00:17:44.930 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:17:44.930 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:45.190 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:45.451 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:17:45.451 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:17:45.711 true 00:17:45.711 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:17:45.711 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:45.711 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:45.972 22:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:17:45.972 22:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:17:46.232 true 00:17:46.232 22:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:17:46.232 22:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:46.232 22:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:46.493 22:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:17:46.493 22:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:17:46.754 true 00:17:46.754 22:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:17:46.754 22:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:47.017 22:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:47.017 22:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:17:47.017 22:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:17:47.279 true 00:17:47.279 22:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:17:47.279 22:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:47.539 22:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:47.539 22:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:17:47.539 22:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:17:47.800 true 00:17:47.800 22:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:17:47.800 22:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:48.065 22:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:48.065 22:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:17:48.065 22:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:17:48.374 true 00:17:48.374 22:16:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:17:48.374 22:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:48.693 22:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:48.693 22:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:17:48.693 22:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:17:48.953 true 00:17:48.953 22:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:17:48.953 22:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:49.214 22:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:49.214 22:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:17:49.214 22:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:17:49.474 true 00:17:49.474 22:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:17:49.474 22:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:49.735 22:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:49.735 22:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:17:49.735 22:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:17:49.997 true 00:17:49.997 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:17:49.997 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:50.257 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:50.257 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:17:50.257 22:16:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:17:50.517 true 00:17:50.517 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:17:50.517 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:50.777 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:51.037 22:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:17:51.037 22:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:17:51.037 true 00:17:51.037 22:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:17:51.037 22:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:51.297 22:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:51.557 22:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:17:51.557 22:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:17:51.557 true 00:17:51.557 22:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:17:51.557 22:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:51.817 22:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:52.077 22:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:17:52.077 22:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:17:52.077 true 00:17:52.338 22:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:17:52.338 22:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:52.338 22:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:17:52.598 22:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:17:52.598 22:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:17:52.859 true 00:17:52.859 22:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:17:52.859 22:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:52.859 22:16:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:53.120 22:16:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:17:53.120 22:16:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:17:53.380 true 00:17:53.380 22:16:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:17:53.380 22:16:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:53.381 22:16:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:53.641 22:16:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:17:53.641 22:16:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:17:53.902 true 00:17:53.902 22:16:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:17:53.902 22:16:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:54.162 22:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:54.162 22:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:17:54.162 22:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:17:54.422 true 00:17:54.422 22:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:17:54.422 22:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:54.683 22:16:49 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:54.683 22:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:17:54.683 22:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:17:54.945 true 00:17:54.945 22:16:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:17:54.945 22:16:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:55.207 22:16:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:55.468 22:16:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:17:55.468 22:16:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:17:55.468 true 00:17:55.468 22:16:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:17:55.468 22:16:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:55.728 22:16:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:55.988 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:17:55.988 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:17:55.988 true 00:17:55.988 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:17:55.988 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:56.250 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:56.510 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:17:56.510 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:17:56.510 true 00:17:56.771 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:17:56.771 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:56.771 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:57.032 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:17:57.032 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:17:57.292 true 00:17:57.292 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:17:57.292 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:57.292 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:57.553 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:17:57.553 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:17:57.814 true 00:17:57.814 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:17:57.814 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:58.076 22:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:58.076 22:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:17:58.076 22:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:17:58.337 true 00:17:58.337 22:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:17:58.337 22:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:58.598 22:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:58.598 22:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:17:58.598 22:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:17:58.858 true 00:17:58.858 22:16:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:17:58.858 22:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:59.119 22:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:59.119 22:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:17:59.119 22:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:17:59.378 true 00:17:59.378 22:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:17:59.378 22:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:59.638 22:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:59.898 22:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:17:59.898 22:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:17:59.898 true 00:17:59.898 22:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:17:59.898 22:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:00.158 22:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:00.418 22:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:18:00.418 22:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:18:00.418 true 00:18:00.418 22:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:18:00.418 22:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:00.677 22:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:00.937 22:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:18:00.937 22:16:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:18:00.937 true 00:18:00.937 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:18:00.937 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:01.197 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:01.457 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:18:01.457 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:18:01.457 true 00:18:01.457 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:18:01.457 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:01.717 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:01.977 22:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:18:01.977 22:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:18:02.236 true 00:18:02.236 22:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:18:02.236 22:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:02.236 22:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:02.496 22:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:18:02.496 22:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:18:02.756 true 00:18:02.756 22:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:18:02.756 22:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:02.756 22:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:18:03.017 22:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:18:03.017 22:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:18:03.277 true 00:18:03.277 22:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:18:03.277 22:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:03.537 22:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:03.537 22:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:18:03.537 22:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:18:03.797 true 00:18:03.797 22:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745 00:18:03.797 22:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:04.057 22:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:04.057 22:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:18:04.057 22:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:18:04.317 Initializing NVMe Controllers 00:18:04.317 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:04.317 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1 00:18:04.317 Controller IO queue size 128, less than required. 00:18:04.317 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:04.317 WARNING: Some requested NVMe devices were skipped 00:18:04.317 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:04.317 Initialization complete. Launching workers. 
00:18:04.317 ========================================================
00:18:04.317                                                              Latency(us)
00:18:04.317 Device Information                                                      :       IOPS      MiB/s    Average        min        max
00:18:04.317 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   30844.12      15.06    4149.73    1596.61    7924.46
00:18:04.317 ========================================================
00:18:04.317 Total                                                                   :   30844.12      15.06    4149.73    1596.61    7924.46
00:18:04.317
00:18:04.317 true
00:18:04.317 22:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4187745
00:18:04.317 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (4187745) - No such process
00:18:04.317 22:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 4187745
00:18:04.317 22:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:18:04.576 22:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:18:04.836 22:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:18:04.836 22:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:18:04.836 22:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:18:04.836 22:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:18:04.836 22:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:18:04.836 null0
00:18:05.096 22:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:18:05.096 22:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:18:05.096 22:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:18:05.096 null1
00:18:05.096 22:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:18:05.096 22:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:18:05.096 22:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:18:05.356 null2
00:18:05.356 22:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:18:05.356 22:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:18:05.356 22:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:18:05.356 null3
00:18:05.356 22:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i
)) 00:18:05.356 22:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:18:05.356 22:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:18:05.616 null4 00:18:05.616 22:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:18:05.616 22:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:18:05.616 22:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:18:05.875 null5 00:18:05.875 22:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:18:05.875 22:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:18:05.875 22:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:18:05.875 null6 00:18:05.875 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:18:05.875 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:18:05.875 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:18:06.135 null7 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
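The long run above, where null_size climbs one unit per pass from 1023 to 1055, is the xtrace of the script's main hotplug loop (markers @44-@50): as long as the I/O generator (pid 4187745) is still alive, namespace 1 is detached from cnode1, the Delay0 bdev is re-attached, and the NULL1 null bdev is grown and resized. A minimal sketch of that loop, reconstructed from the trace alone (rpc_py and perf_pid are stand-in names, not confirmed from the script source):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    perf_pid=4187745    # PID of the backgrounded I/O app in this run
    null_size=1022
    # Keep hot-plugging NSID 1 and growing NULL1 while the I/O app still runs
    while kill -0 "$perf_pid" 2>/dev/null; do
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        $rpc_py bdev_null_resize NULL1 $null_size
    done

In the trace the loop ends exactly when kill -0 starts reporting "No such process", i.e. once the perf run whose latency summary appears above has completed.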
00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
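With namespaces 1 and 2 removed after the I/O phase (markers @54-@55), the trace sets up the concurrency phase: nthreads=8, an empty pids array, and one null bdev per worker (markers @58-@60). Roughly, as reconstructed from the trace, where the two numeric arguments to bdev_null_create are the bdev size in MB and the block size in bytes per that RPC's usage:

    nthreads=8
    pids=()
    # Eight 100 MB null bdevs with 4096-byte blocks: null0 .. null7
    for ((i = 0; i < nthreads; i++)); do
        $rpc_py bdev_null_create "null$i" 100 4096
    done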
00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
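Each worker backgrounded above is the add_remove function whose body shows up in the trace as markers @14-@18: ten iterations of attaching its bdev to cnode1 at a fixed NSID and immediately detaching it. With eight workers running concurrently, add_ns and remove_ns calls for NSIDs 1-8 interleave freely, which is the hot-plug race this test exercises. A sketch consistent with the trace:

    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

    # Launch loop seen at markers @62-@64: one worker per NSID/bdev pair
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"    # marker @66, visible just below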
00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 4194293 4194294 4194296 4194298 4194300 4194302 302 314 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:06.135 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:18:06.395 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:18:06.395 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:06.395 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:18:06.395 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:06.395 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:06.395 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:18:06.395 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:18:06.395 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:18:06.655 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:06.655 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:06.655 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:18:06.655 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:06.655 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:06.655 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:18:06.655 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:06.655 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:06.655 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:18:06.655 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:06.655 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:06.655 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:18:06.655 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:06.655 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:06.655 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:18:06.655 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:06.655 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:06.655 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:18:06.655 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:06.655 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:06.655 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:18:06.655 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:06.655 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:06.655 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:18:06.655 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:06.655 22:17:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:06.916 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:06.916 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:18:06.916 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:18:06.916 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:18:06.916 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:18:06.916 22:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:18:06.916 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:06.916 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:06.916 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:18:06.916 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:06.916 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:06.916 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:18:06.916 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:06.916 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:06.916 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:18:06.916 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:06.916 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:06.916 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:18:06.916 22:17:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:06.916 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:06.916 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:18:06.916 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:06.916 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:06.916 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:18:07.176 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:07.176 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:07.176 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:18:07.176 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:07.176 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:07.176 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:18:07.176 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:07.176 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:07.176 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:18:07.176 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:07.176 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:18:07.176 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:18:07.176 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:18:07.176 22:17:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:18:07.176 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:07.176 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:07.176 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:18:07.436 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:07.436 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:07.436 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:18:07.436 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:07.436 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:07.436 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:07.436 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:18:07.436 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:07.436 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:18:07.436 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:07.436 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:07.436 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:18:07.436 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:07.436 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:07.436 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:18:07.436 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:07.436 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:07.436 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 
00:18:07.436 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:07.436 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:07.436 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:18:07.436 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:07.436 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:18:07.436 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:07.436 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:07.436 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:18:07.697 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:18:07.697 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:18:07.697 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:07.697 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:07.697 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:18:07.697 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:18:07.697 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:07.697 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:07.697 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:18:07.697 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:07.697 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:07.697 22:17:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:18:07.697 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:07.697 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:07.697 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:18:07.697 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:07.697 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:07.697 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:18:07.697 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:07.697 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:07.697 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:18:07.697 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:07.697 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:07.697 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:18:07.697 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:07.958 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:07.958 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:07.958 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:18:07.958 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:18:07.958 22:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:07.958 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:07.958 22:17:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:18:07.958 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:18:07.958 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:07.958 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:07.958 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:18:07.958 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:18:07.958 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:07.958 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:07.958 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:07.958 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:07.958 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:18:07.958 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:18:07.958 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:07.958 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:07.958 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:18:07.958 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:18:07.958 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:07.958 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:07.958 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:18:08.219 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:08.219 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:08.219 22:17:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:18:08.219 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:08.219 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:08.219 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:18:08.219 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:08.219 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:08.219 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:08.219 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:18:08.219 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:18:08.219 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:08.219 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:08.219 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:18:08.219 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:18:08.480 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:18:08.480 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:08.480 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:08.480 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:18:08.480 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:08.480 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:08.480 22:17:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:18:08.480 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:08.480 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:08.480 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:18:08.480 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:08.480 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:08.480 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:18:08.480 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:08.480 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:08.480 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:18:08.480 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:08.480 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:08.480 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:18:08.480 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:18:08.480 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:08.480 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:08.480 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:18:08.480 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:18:08.480 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:08.480 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:18:08.480 22:17:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:08.741 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:08.741 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:18:08.741 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:08.741 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:08.741 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:18:08.741 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:18:08.741 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:08.741 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:08.741 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:18:08.741 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:08.741 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:08.741 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:18:08.741 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:08.741 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:08.741 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:18:08.741 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:08.741 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:08.741 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:18:08.741 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:08.741 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:08.741 22:17:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:18:08.741 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:08.741 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:08.741 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:18:08.741 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:18:09.002 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:09.002 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:09.002 22:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:18:09.002 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:09.002 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:09.002 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:18:09.002 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:18:09.002 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:18:09.002 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:09.002 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:09.002 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:18:09.002 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:18:09.002 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:09.002 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:09.002 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:09.002 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:18:09.002 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:09.002 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:09.002 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:18:09.264 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:09.264 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:09.264 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:18:09.264 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:09.264 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:09.264 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:18:09.264 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:18:09.264 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:09.264 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:09.264 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:18:09.264 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:09.264 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:09.264 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:18:09.264 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:09.264 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:09.264 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:09.264 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:09.264 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:18:09.264 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:18:09.264 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:18:09.526 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:09.526 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:09.526 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:18:09.526 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:09.526 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:09.526 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:09.526 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:18:09.526 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:09.526 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:09.526 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:18:09.526 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:18:09.526 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:18:09.526 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:09.526 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:09.526 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:18:09.526 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:09.526 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:09.526 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:18:09.526 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:18:09.526 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:09.526 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:09.526 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:18:09.526 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:09.788 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:09.788 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:09.788 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:09.788 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:18:09.788 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:18:09.788 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:09.788 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:09.788 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:18:09.788 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:18:09.788 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:09.788 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:09.788 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:18:09.788 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:09.788 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:09.788 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:09.789 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:18:09.789 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:09.789 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:09.789 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:09.789 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:09.789 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:18:09.789 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:09.789 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:10.050 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:18:10.051 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:10.051 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:10.051 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:10.051 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:10.051 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:10.051 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:10.051 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:10.051 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:10.051 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:10.051 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:18:10.051 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:10.051 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:18:10.051 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:10.051 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:18:10.051 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:18:10.051 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:10.051 rmmod nvme_tcp 00:18:10.051 rmmod nvme_fabrics 00:18:10.312 rmmod nvme_keyring 00:18:10.312 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:10.312 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:18:10.312 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:18:10.312 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 4187226 ']' 00:18:10.312 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 4187226 00:18:10.312 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 4187226 ']' 00:18:10.313 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 4187226 00:18:10.313 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:18:10.313 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:10.313 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4187226 00:18:10.313 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:10.313 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:10.313 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4187226' 00:18:10.313 killing process with pid 4187226 00:18:10.313 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 4187226 00:18:10.313 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 4187226 00:18:10.574 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:10.574 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:10.574 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:10.574 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:18:10.574 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:18:10.574 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:18:10.574 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:18:10.574 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:10.574 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:10.574 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:10.574 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:10.574 22:17:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:12.491 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:12.491 00:18:12.491 real 0m48.619s 00:18:12.491 user 3m19.645s 00:18:12.491 sys 0m16.726s 00:18:12.491 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:12.491 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:18:12.491 ************************************ 00:18:12.491 END TEST nvmf_ns_hotplug_stress 00:18:12.491 ************************************ 00:18:12.491 22:17:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:18:12.491 22:17:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:12.491 22:17:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:12.491 22:17:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:18:12.491 ************************************ 00:18:12.491 START TEST nvmf_delete_subsystem 00:18:12.491 ************************************ 00:18:12.491 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:18:12.754 * Looking for test storage... 00:18:12.754 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@344 -- # case "$op" in 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:12.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:12.755 --rc genhtml_branch_coverage=1 00:18:12.755 --rc genhtml_function_coverage=1 00:18:12.755 --rc genhtml_legend=1 00:18:12.755 --rc geninfo_all_blocks=1 00:18:12.755 --rc geninfo_unexecuted_blocks=1 00:18:12.755 00:18:12.755 ' 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:12.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:12.755 --rc genhtml_branch_coverage=1 00:18:12.755 --rc genhtml_function_coverage=1 00:18:12.755 --rc genhtml_legend=1 00:18:12.755 --rc geninfo_all_blocks=1 00:18:12.755 --rc geninfo_unexecuted_blocks=1 00:18:12.755 00:18:12.755 ' 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:12.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:12.755 --rc genhtml_branch_coverage=1 00:18:12.755 --rc genhtml_function_coverage=1 00:18:12.755 --rc genhtml_legend=1 00:18:12.755 --rc geninfo_all_blocks=1 00:18:12.755 --rc geninfo_unexecuted_blocks=1 00:18:12.755 00:18:12.755 ' 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:12.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:18:12.755 --rc genhtml_branch_coverage=1 00:18:12.755 --rc genhtml_function_coverage=1 00:18:12.755 --rc genhtml_legend=1 00:18:12.755 --rc geninfo_all_blocks=1 00:18:12.755 --rc geninfo_unexecuted_blocks=1 00:18:12.755 00:18:12.755 ' 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:12.755 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.756 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.756 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.756 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:18:12.756 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.756 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:18:12.756 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:12.756 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:12.756 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:12.756 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:12.756 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:12.756 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:12.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:12.756 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:12.756 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:12.756 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:12.756 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:18:12.756 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:12.756 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:12.756 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:12.756 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:12.756 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:12.756 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:12.756 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:12.756 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:12.756 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:12.756 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:12.756 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:18:12.756 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:20.907 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:20.907 
22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:20.907 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:20.907 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:20.907 Found net devices under 0000:4b:00.1: cvl_0_1 
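Note: the per-device loop above turns each whitelisted PCI function into kernel interface names purely through sysfs, which is how 0000:4b:00.0 and 0000:4b:00.1 resolve to cvl_0_0 and cvl_0_1. The same lookup as a standalone snippet (the address is the one from this run):

```sh
# Resolve the net device(s) behind a PCI function via sysfs:
pci=0000:4b:00.0                                  # example from this run
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # e.g. .../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")           # strip the path, keep names
echo "Found net devices under $pci: ${pci_net_devs[*]}"
```

The [[ up == up ]] checks in the trace additionally appear to filter on each interface's operstate before it is appended to net_devs.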
00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:18:20.907 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:18:20.908 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:20.908 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:20.908 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:20.908 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:20.908 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:20.908 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:20.908 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:20.908 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:20.908 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:20.908 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:20.908 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:20.908 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:20.908 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:20.908 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:20.908 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:20.908 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:20.908 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:20.908 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:20.908 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:20.908 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:20.908 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:20.908 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:20.908 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:20.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:20.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.572 ms 00:18:20.908 00:18:20.908 --- 10.0.0.2 ping statistics --- 00:18:20.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:20.908 rtt min/avg/max/mdev = 0.572/0.572/0.572/0.000 ms 00:18:20.908 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:20.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:20.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:18:20.908 00:18:20.908 --- 10.0.0.1 ping statistics --- 00:18:20.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:20.908 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:18:20.908 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:20.908 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:18:20.908 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:20.908 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:20.908 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:18:20.908 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:18:20.908 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:20.908 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:18:20.908 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:18:20.908 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:18:20.908 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:20.908 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:20.908 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:20.908 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=6500 00:18:20.908 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 6500 00:18:20.908 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:20.908 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 6500 ']' 00:18:20.908 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.908 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:20.908 22:17:15 
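Note: nvmf_tcp_init above wires the two physical functions into a loopback-free test topology: the target port cvl_0_0 (10.0.0.2/24) is moved into the private namespace cvl_0_0_ns_spdk, the initiator port cvl_0_1 (10.0.0.1/24) stays in the root namespace, and the firewall rule is tagged so teardown can find it later. Condensed from the commands traced above (the long comment string is elided):

```sh
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC into the ns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port; the SPDK_NVMF comment tags the rule for cleanup:
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:...'
# Prove reachability in both directions before starting the target:
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```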
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:20.908 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:20.908 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:20.908 [2024-10-01 22:17:15.366082] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:18:20.908 [2024-10-01 22:17:15.366149] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:20.908 [2024-10-01 22:17:15.437252] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:20.908 [2024-10-01 22:17:15.511757] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:20.908 [2024-10-01 22:17:15.511796] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:20.908 [2024-10-01 22:17:15.511804] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:20.908 [2024-10-01 22:17:15.511812] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:20.908 [2024-10-01 22:17:15.511817] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:20.908 [2024-10-01 22:17:15.511903] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:20.908 [2024-10-01 22:17:15.512083] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.170 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:21.170 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:18:21.170 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:21.170 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:21.170 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:21.170 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:21.170 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:21.170 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.170 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:21.170 [2024-10-01 22:17:16.226308] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:21.170 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.170 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:21.170 22:17:16 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.170 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:21.170 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.170 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:21.170 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.170 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:21.170 [2024-10-01 22:17:16.250477] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:21.170 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.170 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:21.170 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.170 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:21.170 NULL1 00:18:21.170 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.170 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:21.170 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.170 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:21.170 Delay0 00:18:21.170 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.170 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:21.170 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.170 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:21.170 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.170 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=6829 00:18:21.170 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:18:21.170 22:17:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:18:21.170 [2024-10-01 22:17:16.347416] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
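Note: with nvmf_tgt running inside the namespace (-m 0x3, two reactors), the test assembles its stack over RPC exactly as traced above: a TCP transport, a subsystem capped at 10 namespaces, a listener on 10.0.0.2:4420, and a null bdev wrapped in a delay bdev that injects roughly a second of latency per I/O, so commands are guaranteed to be in flight when the subsystem is later deleted. The same sequence as direct rpc.py calls (rpc_cmd in the trace is a wrapper around this client; the path is assumed):

```sh
rpc=./scripts/rpc.py      # assumed location of the SPDK RPC client
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
     -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
     -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_null_create NULL1 1000 512        # 1000 MiB bdev, 512 B blocks
$rpc bdev_delay_create -b NULL1 -d Delay0 \
     -r 1000000 -t 1000000 -w 1000000 -n 1000000   # latencies in usec, ~1 s
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
```

spdk_nvme_perf then drives a randrw workload (70% reads, 512-byte I/O, queue depth 128) at that namespace while the delete fires.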
00:18:23.085 22:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:23.085 22:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.085 22:17:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:23.657 Write completed with error (sct=0, sc=8) 00:18:23.657 Write completed with error (sct=0, sc=8) 00:18:23.657 starting I/O failed: -6 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 starting I/O failed: -6 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 starting I/O failed: -6 00:18:23.657 Write completed with error (sct=0, sc=8) 00:18:23.657 Write completed with error (sct=0, sc=8) 00:18:23.657 Write completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 starting I/O failed: -6 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 starting I/O failed: -6 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 starting I/O failed: -6 00:18:23.657 Write completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 starting I/O failed: -6 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Write completed with error (sct=0, sc=8) 00:18:23.657 Write completed with error (sct=0, sc=8) 00:18:23.657 starting I/O failed: -6 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Write completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 starting I/O failed: -6 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 starting I/O failed: -6 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Write completed with error (sct=0, sc=8) 00:18:23.657 starting I/O failed: -6 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Write completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 starting I/O failed: -6 00:18:23.657 [2024-10-01 22:17:18.602448] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1760390 is same with the state(6) to be set 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with 
error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Write completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Write completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Write completed with error (sct=0, sc=8) 00:18:23.657 Write completed with error (sct=0, sc=8) 00:18:23.657 Write completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Write completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Write completed with error (sct=0, sc=8) 00:18:23.657 Write completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Write completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Write completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Write completed with error (sct=0, sc=8) 00:18:23.657 Write completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Write completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Write completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Write completed with error (sct=0, sc=8) 00:18:23.657 Write completed with error (sct=0, sc=8) 00:18:23.657 Write completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 starting I/O failed: -6 00:18:23.657 Write completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 starting I/O failed: -6 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Read completed with error (sct=0, sc=8) 00:18:23.657 Write completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, 
sc=8) 00:18:23.658 starting I/O failed: -6 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Write completed with error (sct=0, sc=8) 00:18:23.658 Write completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 starting I/O failed: -6 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Write completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Write completed with error (sct=0, sc=8) 00:18:23.658 starting I/O failed: -6 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Write completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 starting I/O failed: -6 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 starting I/O failed: -6 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Write completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 starting I/O failed: -6 00:18:23.658 Write completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Write completed with error (sct=0, sc=8) 00:18:23.658 Write completed with error (sct=0, sc=8) 00:18:23.658 starting I/O failed: -6 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Write completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 starting I/O failed: -6 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 starting I/O failed: -6 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 [2024-10-01 22:17:18.606464] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f621c00cfe0 is same with the state(6) to be set 00:18:23.658 Write completed with error (sct=0, sc=8) 00:18:23.658 Write completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Write completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Write completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Write completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Write completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 
00:18:23.658 Write completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Write completed with error (sct=0, sc=8) 00:18:23.658 Write completed with error (sct=0, sc=8) 00:18:23.658 Write completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Write completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Write completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Write completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Write completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Write completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:23.658 Read completed with error (sct=0, sc=8) 00:18:24.602 [2024-10-01 22:17:19.574827] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1761a70 is same with the state(6) to be set 00:18:24.602 Write completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Write completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Write completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Write completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Write completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Write completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Write completed with error (sct=0, sc=8) 00:18:24.602 [2024-10-01 22:17:19.605749] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1760570 is same with the state(6) to be set 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Write completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Read 
completed with error (sct=0, sc=8) 00:18:24.602 Write completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Write completed with error (sct=0, sc=8) 00:18:24.602 Write completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Write completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Write completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 [2024-10-01 22:17:19.606373] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1760930 is same with the state(6) to be set 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Write completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Write completed with error (sct=0, sc=8) 00:18:24.602 Write completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Write completed with error (sct=0, sc=8) 00:18:24.602 Write completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Write completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Write completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Write completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Write completed with error (sct=0, sc=8) 00:18:24.602 [2024-10-01 22:17:19.608660] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f621c00d310 is same with the state(6) to be set 00:18:24.602 Write completed with error (sct=0, sc=8) 00:18:24.602 Write completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Write completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Write completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Write completed with error (sct=0, sc=8) 00:18:24.602 Write completed with error (sct=0, sc=8) 00:18:24.602 Write completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Read completed 
with error (sct=0, sc=8) 00:18:24.602 Write completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.602 Write completed with error (sct=0, sc=8) 00:18:24.602 Read completed with error (sct=0, sc=8) 00:18:24.603 Write completed with error (sct=0, sc=8) 00:18:24.603 Read completed with error (sct=0, sc=8) 00:18:24.603 Write completed with error (sct=0, sc=8) 00:18:24.603 Read completed with error (sct=0, sc=8) 00:18:24.603 Read completed with error (sct=0, sc=8) 00:18:24.603 Read completed with error (sct=0, sc=8) 00:18:24.603 [2024-10-01 22:17:19.609001] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f621c000c00 is same with the state(6) to be set 00:18:24.603 Initializing NVMe Controllers 00:18:24.603 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:24.603 Controller IO queue size 128, less than required. 00:18:24.603 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:24.603 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:18:24.603 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:18:24.603 Initialization complete. Launching workers. 00:18:24.603 ======================================================== 00:18:24.603 Latency(us) 00:18:24.603 Device Information : IOPS MiB/s Average min max 00:18:24.603 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 171.75 0.08 890396.03 259.63 1007938.15 00:18:24.603 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 165.28 0.08 938222.25 353.72 2002976.57 00:18:24.603 ======================================================== 00:18:24.603 Total : 337.03 0.16 913849.95 259.63 2002976.57 00:18:24.603 00:18:24.603 [2024-10-01 22:17:19.609387] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1761a70 (9): Bad file descriptor 00:18:24.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:18:24.603 22:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.603 22:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:18:24.603 22:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 6829 00:18:24.603 22:17:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:18:25.175 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:18:25.175 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 6829 00:18:25.175 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (6829) - No such process 00:18:25.175 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 6829 00:18:25.175 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:18:25.175 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 6829 00:18:25.175 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
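Note: the wall of "Read/Write completed with error (sct=0, sc=8)" lines above is the intended outcome, not a malfunction: sct=0 is the NVMe Generic Command Status type, and status code 0x8 in that set is "Command Aborted due to SQ Deletion", which is exactly what deleting the subsystem under an active workload should produce. The "starting I/O failed: -6" lines are the submission-side counterpart (consistent with -ENXIO once the queue pair is torn down), and the perf summary still accounts for the I/O that completed before spdk_nvme_perf bails out with "errors occurred". The kill -0 probe right after confirms pid 6829 exited within the allowed window, so the NOT-wrapped wait asserting a nonzero exit status passes.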
common/autotest_common.sh@638 -- # local arg=wait 00:18:25.175 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:25.175 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:18:25.175 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:25.175 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 6829 00:18:25.175 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:18:25.175 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:25.175 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:25.175 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:25.175 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:25.175 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.175 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:25.175 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.175 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:25.175 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.175 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:25.175 [2024-10-01 22:17:20.142051] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:25.175 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.175 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:25.175 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.175 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:25.175 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.175 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=7515 00:18:25.175 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:18:25.175 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:18:25.175 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 7515 00:18:25.175 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@58 -- # sleep 0.5 00:18:25.175 [2024-10-01 22:17:20.209070] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:25.437 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:18:25.437 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 7515 00:18:25.437 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:18:26.007 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:18:26.007 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 7515 00:18:26.008 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:18:26.577 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:18:26.577 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 7515 00:18:26.577 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:18:27.150 22:17:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:18:27.150 22:17:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 7515 00:18:27.150 22:17:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:18:27.722 22:17:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:18:27.722 22:17:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 7515 00:18:27.722 22:17:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:18:27.982 22:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:18:27.982 22:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 7515 00:18:27.982 22:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:18:28.242 Initializing NVMe Controllers 00:18:28.243 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:28.243 Controller IO queue size 128, less than required. 00:18:28.243 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:28.243 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:18:28.243 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:18:28.243 Initialization complete. Launching workers. 
00:18:28.243 ======================================================== 00:18:28.243 Latency(us) 00:18:28.243 Device Information : IOPS MiB/s Average min max 00:18:28.243 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001974.33 1000158.69 1040813.65 00:18:28.243 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002979.90 1000248.56 1040805.44 00:18:28.243 ======================================================== 00:18:28.243 Total : 256.00 0.12 1002477.12 1000158.69 1040813.65 00:18:28.243 00:18:28.503 22:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:18:28.503 22:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 7515 00:18:28.503 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (7515) - No such process 00:18:28.503 22:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 7515 00:18:28.503 22:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:18:28.503 22:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:18:28.503 22:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:28.503 22:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:18:28.503 22:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:28.503 22:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:18:28.503 22:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:28.503 22:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:28.503 rmmod nvme_tcp 00:18:28.503 rmmod nvme_fabrics 00:18:28.503 rmmod nvme_keyring 00:18:28.763 22:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:28.763 22:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:18:28.763 22:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:18:28.763 22:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 6500 ']' 00:18:28.763 22:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 6500 00:18:28.763 22:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 6500 ']' 00:18:28.763 22:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 6500 00:18:28.763 22:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:18:28.763 22:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:28.763 22:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 6500 00:18:28.763 22:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:28.763 22:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 
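Note: the delay/kill/sleep churn above is the watchdog from delete_subsystem.sh lines 56-60: after nvmf_delete_subsystem is issued under load, the script polls the perf process every half second and fails the run only if it outlives the budget; the "No such process" error from kill is the success path here. The shape of that loop, reconstructed from the trace (the exact script text may differ):

```sh
# perf (pid 7515 in this pass) must die soon after its subsystem does.
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    (( delay++ > 20 )) && { echo "perf outlived the delete" >&2; exit 1; }
    sleep 0.5
done
wait "$perf_pid"   # reap the (expectedly nonzero) exit status
```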
00:18:28.763 22:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 6500' 00:18:28.763 killing process with pid 6500 00:18:28.763 22:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 6500 00:18:28.763 22:17:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 6500 00:18:28.763 22:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:28.764 22:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:28.764 22:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:29.025 22:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:18:29.025 22:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:18:29.025 22:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:18:29.025 22:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:18:29.025 22:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:29.025 22:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:29.025 22:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:29.025 22:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:29.025 22:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:30.938 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:30.938 00:18:30.938 real 0m18.368s 00:18:30.938 user 0m31.165s 00:18:30.938 sys 0m6.747s 00:18:30.938 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:30.938 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:30.938 ************************************ 00:18:30.938 END TEST nvmf_delete_subsystem 00:18:30.938 ************************************ 00:18:30.938 22:17:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:18:30.938 22:17:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:30.938 22:17:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:30.938 22:17:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:18:30.938 ************************************ 00:18:30.938 START TEST nvmf_host_management 00:18:30.938 ************************************ 00:18:30.938 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:18:31.205 * Looking for test storage... 
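Note: before host_management starts, the nvmftestfini/nvmfcleanup sequence traced above undoes the setup step for step: unload the kernel NVMe modules (the rmmod lines), kill the target (pid 6500) after checking it is not someone else's process, restore the firewall minus the rules tagged SPDK_NVMF, dismantle the namespace, and flush the initiator address. A condensed sketch; _remove_spdk_ns runs behind xtrace_disable_per_cmd, so the `ip netns delete` is an assumption about its body:

```sh
modprobe -v -r nvme-tcp                      # also drops nvme_fabrics/nvme_keyring
kill "$nvmfpid" && wait "$nvmfpid"           # pid 6500 in this run
iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep untagged rules only
ip netns delete cvl_0_0_ns_spdk 2>/dev/null  # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1
```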
00:18:31.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:31.205 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:31.205 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:18:31.205 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:31.205 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:31.205 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:31.205 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:31.205 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:31.205 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:18:31.205 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:18:31.205 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:18:31.205 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:18:31.205 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:18:31.205 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:18:31.205 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:18:31.205 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:31.205 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:18:31.205 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:18:31.205 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:31.205 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:31.205 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:18:31.205 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:18:31.205 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:31.205 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:18:31.205 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:18:31.205 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:18:31.205 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:18:31.205 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:31.205 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:18:31.205 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:18:31.205 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:31.205 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:31.205 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:18:31.205 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:31.205 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:31.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.205 --rc genhtml_branch_coverage=1 00:18:31.205 --rc genhtml_function_coverage=1 00:18:31.205 --rc genhtml_legend=1 00:18:31.205 --rc geninfo_all_blocks=1 00:18:31.205 --rc geninfo_unexecuted_blocks=1 00:18:31.205 00:18:31.205 ' 00:18:31.205 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:31.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.205 --rc genhtml_branch_coverage=1 00:18:31.205 --rc genhtml_function_coverage=1 00:18:31.205 --rc genhtml_legend=1 00:18:31.205 --rc geninfo_all_blocks=1 00:18:31.205 --rc geninfo_unexecuted_blocks=1 00:18:31.205 00:18:31.205 ' 00:18:31.205 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:31.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.205 --rc genhtml_branch_coverage=1 00:18:31.205 --rc genhtml_function_coverage=1 00:18:31.205 --rc genhtml_legend=1 00:18:31.205 --rc geninfo_all_blocks=1 00:18:31.205 --rc geninfo_unexecuted_blocks=1 00:18:31.205 00:18:31.205 ' 00:18:31.205 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:31.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.205 --rc genhtml_branch_coverage=1 00:18:31.205 --rc genhtml_function_coverage=1 00:18:31.205 --rc genhtml_legend=1 00:18:31.205 --rc geninfo_all_blocks=1 00:18:31.205 --rc geninfo_unexecuted_blocks=1 00:18:31.205 00:18:31.205 ' 00:18:31.205 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:31.205 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:18:31.205 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:(the same three tool directories repeated five more times):/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:(prepended to the PATH value above) 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:(prepended to the PATH value above) 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo (the final PATH value set at export.sh@4) 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33
-- # '[' '' -eq 1 ']' 00:18:31.206 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:18:31.206 22:17:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:39.391 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:39.391 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:39.391 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:39.391 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:39.392 22:17:33 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:39.392 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:39.392 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:39.392 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.495 ms 00:18:39.392 00:18:39.392 --- 10.0.0.2 ping statistics --- 00:18:39.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.392 rtt min/avg/max/mdev = 0.495/0.495/0.495/0.000 ms 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:39.392 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:39.392 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:18:39.392 00:18:39.392 --- 10.0.0.1 ping statistics --- 00:18:39.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.392 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=12544 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 12544 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:18:39.392 22:17:33 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 12544 ']' 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:39.392 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:18:39.392 [2024-10-01 22:17:33.671921] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:18:39.392 [2024-10-01 22:17:33.671971] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:39.392 [2024-10-01 22:17:33.756338] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:39.392 [2024-10-01 22:17:33.831021] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:39.392 [2024-10-01 22:17:33.831073] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:39.392 [2024-10-01 22:17:33.831082] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:39.392 [2024-10-01 22:17:33.831089] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:39.392 [2024-10-01 22:17:33.831095] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
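(The waitforlisten step above polls until the freshly launched nvmf_tgt answers on its UNIX RPC socket; rpc_addr=/var/tmp/spdk.sock and max_retries=100 come straight from the trace, and the reactor start-up notices below are printed once the daemon is up. A minimal bash sketch of that polling pattern follows; wait_for_rpc_sock and its exact checks are illustrative, not the suite's actual helper.)

```bash
# Hypothetical sketch of the waitforlisten pattern seen in the trace:
# poll until the target process is alive and its UNIX RPC socket appears.
wait_for_rpc_sock() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}   # socket path from the trace
    local max_retries=${3:-100}               # retry budget from the trace
    local i
    echo "Waiting for process to start up and listen on UNIX domain socket ${rpc_addr}..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
        [[ -S $rpc_addr ]] && return 0           # socket exists; RPC is reachable
        sleep 0.5
    done
    return 1                                     # retry budget exhausted
}

# Illustrative use, mirroring the nvmfappstart step above:
# ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
# wait_for_rpc_sock $! /var/tmp/spdk.sock 100
```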
00:18:39.392 [2024-10-01 22:17:33.831224] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:39.392 [2024-10-01 22:17:33.831385] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:18:39.392 [2024-10-01 22:17:33.831551] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:39.392 [2024-10-01 22:17:33.831551] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:18:39.392 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:39.392 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:18:39.392 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:39.392 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:39.392 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:18:39.392 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:39.392 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:39.392 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.392 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:18:39.392 [2024-10-01 22:17:34.520585] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:39.392 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.392 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:18:39.392 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:39.392 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:18:39.392 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:39.392 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:18:39.392 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:18:39.392 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.392 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:18:39.392 Malloc0 00:18:39.392 [2024-10-01 22:17:34.584027] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:39.392 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.392 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:18:39.393 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:39.393 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:18:39.393 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=12838 00:18:39.393 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 12838 /var/tmp/bdevperf.sock 00:18:39.393 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 12838 ']' 00:18:39.393 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:39.393 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:39.393 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:39.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:39.393 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:18:39.393 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:39.393 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:18:39.393 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:18:39.393 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:18:39.393 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:18:39.393 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:39.393 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:39.393 { 00:18:39.393 "params": { 00:18:39.393 "name": "Nvme$subsystem", 00:18:39.393 "trtype": "$TEST_TRANSPORT", 00:18:39.393 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:39.393 "adrfam": "ipv4", 00:18:39.393 "trsvcid": "$NVMF_PORT", 00:18:39.393 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:39.393 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:39.393 "hdgst": ${hdgst:-false}, 00:18:39.393 "ddgst": ${ddgst:-false} 00:18:39.393 }, 00:18:39.393 "method": "bdev_nvme_attach_controller" 00:18:39.393 } 00:18:39.393 EOF 00:18:39.393 )") 00:18:39.654 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:18:39.654 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:18:39.654 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:18:39.654 22:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:18:39.654 "params": { 00:18:39.654 "name": "Nvme0", 00:18:39.654 "trtype": "tcp", 00:18:39.654 "traddr": "10.0.0.2", 00:18:39.654 "adrfam": "ipv4", 00:18:39.654 "trsvcid": "4420", 00:18:39.654 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:39.654 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:39.654 "hdgst": false, 00:18:39.654 "ddgst": false 00:18:39.654 }, 00:18:39.654 "method": "bdev_nvme_attach_controller" 00:18:39.654 }' 00:18:39.654 [2024-10-01 22:17:34.687235] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
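(The gen_nvmf_target_json step above builds the config that bdevperf reads via --json /dev/fd/63: one heredoc fragment per subsystem id, joined with IFS=, and pretty-printed through jq, yielding the resolved bdev_nvme_attach_controller parameters shown in the trace. A condensed sketch of that templating pattern follows; gen_target_json is an illustrative stand-in, not the suite's function verbatim, and the hard-coded address/port mirror the resolved values printed above.)

```bash
# Hypothetical condensed form of the JSON templating shown in the trace:
# emit one bdev_nvme_attach_controller entry per subsystem id.
gen_target_json() {
    local subsystem
    local -a config=()
    for subsystem in "${@:-0}"; do          # default to id 0, as the test invokes it
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    local IFS=,                # join the fragments with commas, as the trace shows
    printf '%s\n' "${config[*]}"
}

# Illustrative use: bdevperf -r /var/tmp/bdevperf.sock --json <(gen_target_json 0) \
#   -q 64 -o 65536 -w verify -t 10
```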
00:18:39.654 [2024-10-01 22:17:34.687288] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid12838 ] 00:18:39.654 [2024-10-01 22:17:34.748164] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.654 [2024-10-01 22:17:34.813232] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.914 Running I/O for 10 seconds... 00:18:40.487 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:40.487 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:18:40.487 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:18:40.487 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.487 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:18:40.487 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.487 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:40.487 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:18:40.487 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:18:40.487 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:18:40.487 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:18:40.487 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:18:40.487 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:18:40.487 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:18:40.487 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:18:40.487 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:18:40.487 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.487 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:18:40.487 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.487 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:18:40.487 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:18:40.487 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:18:40.487 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:18:40.487 22:17:35 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:18:40.487 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:18:40.487 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.487 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:18:40.487 [2024-10-01 22:17:35.551114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2035260 is same with the state(6) to be set 00:18:40.487 (the same tcp.c:1773 recv-state message repeats roughly 60 more times while the host is removed, timestamps 22:17:35.551187 through 22:17:35.551600) 00:18:40.488 [2024-10-01 22:17:35.552248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.488 [2024-10-01 22:17:35.552285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.488 (matching print_command/print_completion pairs follow for READ cid:1 through cid:38, lba:82048 through lba:86784 in 128-block steps, every one completed as ABORTED - SQ DELETION (00/08), timestamps 22:17:35.552303 through 22:17:35.552956) 00:18:40.489 [2024-10-01 22:17:35.552966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.489 [2024-10-01
22:17:35.552973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.489 [2024-10-01 22:17:35.552982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.489 [2024-10-01 22:17:35.552989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.489 [2024-10-01 22:17:35.552999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.489 [2024-10-01 22:17:35.553007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.489 [2024-10-01 22:17:35.553017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.489 [2024-10-01 22:17:35.553024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.489 [2024-10-01 22:17:35.553034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.489 [2024-10-01 22:17:35.553041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.489 [2024-10-01 22:17:35.553050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.489 [2024-10-01 22:17:35.553057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.489 [2024-10-01 22:17:35.553067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.489 [2024-10-01 22:17:35.553074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.489 [2024-10-01 22:17:35.553084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.489 [2024-10-01 22:17:35.553091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.489 [2024-10-01 22:17:35.553100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.489 [2024-10-01 22:17:35.553108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.489 [2024-10-01 22:17:35.553117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.489 [2024-10-01 22:17:35.553124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.489 [2024-10-01 22:17:35.553134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.489 [2024-10-01 22:17:35.553141] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.489 [2024-10-01 22:17:35.553150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.489 [2024-10-01 22:17:35.553157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.489 [2024-10-01 22:17:35.553168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.489 [2024-10-01 22:17:35.553175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.489 [2024-10-01 22:17:35.553184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.489 [2024-10-01 22:17:35.553191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.489 [2024-10-01 22:17:35.553201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.489 [2024-10-01 22:17:35.553208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.489 [2024-10-01 22:17:35.553219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.489 [2024-10-01 22:17:35.553226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.489 [2024-10-01 22:17:35.553236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.489 [2024-10-01 22:17:35.553243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.489 [2024-10-01 22:17:35.553252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.489 [2024-10-01 22:17:35.553259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.489 [2024-10-01 22:17:35.553269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.489 [2024-10-01 22:17:35.553276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.489 [2024-10-01 22:17:35.553286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.489 [2024-10-01 22:17:35.553293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.489 [2024-10-01 22:17:35.553303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.489 [2024-10-01 22:17:35.553311] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.489 [2024-10-01 22:17:35.553320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.489 [2024-10-01 22:17:35.553328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.489 [2024-10-01 22:17:35.553337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.489 [2024-10-01 22:17:35.553344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.489 [2024-10-01 22:17:35.553354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.489 [2024-10-01 22:17:35.553361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.489 [2024-10-01 22:17:35.553370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.489 [2024-10-01 22:17:35.553378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.489 [2024-10-01 22:17:35.553386] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1555420 is same with the state(6) to be set 00:18:40.489 [2024-10-01 22:17:35.553428] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1555420 was disconnected and freed. reset controller. 
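Editorial note: the parenthesized pair in these completions is SCT/SC, the NVMe status code type and status code. (00/08) is the generic command status "Command Aborted due to SQ Deletion", which is what every in-flight READ above collapses to once the reset tears down I/O queue 1. A minimal lookup sketch in shell for the statuses actually seen in this run; any further table entries would be assumptions to verify against the NVMe base specification:

decode_nvme_status() {
    # $1 is the SCT/SC pair as printed by spdk_nvme_print_completion, without parentheses
    case "$1" in
        00/00) echo "GENERIC: SUCCESSFUL COMPLETION" ;;
        00/08) echo "GENERIC: ABORTED - SQ DELETION" ;;   # the status flooding this log
        *)     echo "unmapped status $1; consult the NVMe base spec status tables" ;;
    esac
}
decode_nvme_status 00/08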
00:18:40.489 [2024-10-01 22:17:35.553484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:18:40.489 [2024-10-01 22:17:35.553496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:40.489 [2024-10-01 22:17:35.553504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:18:40.490 [2024-10-01 22:17:35.553513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:40.490 [2024-10-01 22:17:35.553521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:18:40.490 [2024-10-01 22:17:35.553528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:40.490 [2024-10-01 22:17:35.553536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:18:40.490 [2024-10-01 22:17:35.553544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:40.490 [2024-10-01 22:17:35.553551] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1545290 is same with the state(6) to be set
00:18:40.490 [2024-10-01 22:17:35.554790] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:18:40.490 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:40.490 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:18:40.490 task offset: 81920 on job bdev=Nvme0n1 fails
00:18:40.490
00:18:40.490                                                           Latency(us)
00:18:40.490 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:18:40.490 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:40.490 Job: Nvme0n1 ended in about 0.49 seconds with error
00:18:40.490 Verification LBA range: start 0x0 length 0x400
00:18:40.490 Nvme0n1             :       0.49    1303.98      81.50     130.40       0.00   43477.86    5215.57   36481.71
00:18:40.490 ===================================================================================================================
00:18:40.490 Total               :              1303.98      81.50     130.40       0.00   43477.86    5215.57   36481.71
00:18:40.490
00:18:40.490 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:40.490 [2024-10-01 22:17:35.556817] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:18:40.490 [2024-10-01 22:17:35.556839] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1545290 (9): Bad file descriptor
00:18:40.490 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:18:40.490 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:40.490 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:18:40.490 [2024-10-01 22:17:35.578703] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller
successful. 00:18:41.431 22:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 12838 00:18:41.431 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (12838) - No such process 00:18:41.431 22:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:18:41.431 22:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:18:41.431 22:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:18:41.431 22:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:18:41.431 22:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:18:41.431 22:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:18:41.431 22:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:41.431 22:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:41.431 { 00:18:41.431 "params": { 00:18:41.431 "name": "Nvme$subsystem", 00:18:41.431 "trtype": "$TEST_TRANSPORT", 00:18:41.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:41.431 "adrfam": "ipv4", 00:18:41.431 "trsvcid": "$NVMF_PORT", 00:18:41.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:41.432 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:41.432 "hdgst": ${hdgst:-false}, 00:18:41.432 "ddgst": ${ddgst:-false} 00:18:41.432 }, 00:18:41.432 "method": "bdev_nvme_attach_controller" 00:18:41.432 } 00:18:41.432 EOF 00:18:41.432 )") 00:18:41.432 22:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:18:41.432 22:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:18:41.432 22:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:18:41.432 22:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:18:41.432 "params": { 00:18:41.432 "name": "Nvme0", 00:18:41.432 "trtype": "tcp", 00:18:41.432 "traddr": "10.0.0.2", 00:18:41.432 "adrfam": "ipv4", 00:18:41.432 "trsvcid": "4420", 00:18:41.432 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:41.432 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:41.432 "hdgst": false, 00:18:41.432 "ddgst": false 00:18:41.432 }, 00:18:41.432 "method": "bdev_nvme_attach_controller" 00:18:41.432 }' 00:18:41.432 [2024-10-01 22:17:36.628533] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:18:41.432 [2024-10-01 22:17:36.628597] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid13270 ] 00:18:41.691 [2024-10-01 22:17:36.696640] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.691 [2024-10-01 22:17:36.760072] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.952 Running I/O for 1 seconds... 
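Editorial note: the gen_nvmf_target_json heredoc above emits exactly the bdev_nvme_attach_controller parameter block shown by the printf, and bdevperf reads it through the /dev/fd/62 process-substitution descriptor. A standalone sketch of an equivalent run, assuming the usual SPDK "subsystems" wrapper for a --json config file (the wrapper shape and the /tmp path are assumptions; the parameter values and flags are copied from this run):

# Write the same attach-controller config to a regular file.
cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same flags as the run above: queue depth 64, 64 KiB verify I/O, 1 second.
./build/examples/bdevperf --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1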
00:18:42.892 1350.00 IOPS, 84.38 MiB/s
00:18:42.892                                                           Latency(us)
00:18:42.892 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:18:42.892 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:42.892 Verification LBA range: start 0x0 length 0x400
00:18:42.892 Nvme0n1             :       1.01    1405.08      87.82       0.00       0.00   44684.29    1501.87   34515.63
00:18:42.893 ===================================================================================================================
00:18:42.893 Total               :              1405.08      87.82       0.00       0.00   44684.29    1501.87   34515.63
00:18:43.152 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
22:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
22:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
22:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
22:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
22:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup
22:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync
22:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
22:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
22:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
22:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
22:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
22:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
22:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
22:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 12544 ']'
22:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 12544
22:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 12544 ']'
22:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 12544
22:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname
22:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
22:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 12544
22:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1
22:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
22:17:38
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 12544' 00:18:43.153 killing process with pid 12544 00:18:43.153 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 12544 00:18:43.153 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 12544 00:18:43.413 [2024-10-01 22:17:38.521747] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:18:43.413 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:43.413 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:43.413 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:43.413 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:18:43.413 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:18:43.413 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:18:43.413 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:18:43.413 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:43.413 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:43.413 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.413 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:43.413 22:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.957 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:45.957 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:18:45.957 00:18:45.957 real 0m14.448s 00:18:45.957 user 0m23.350s 00:18:45.957 sys 0m6.597s 00:18:45.957 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:45.957 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:18:45.957 ************************************ 00:18:45.957 END TEST nvmf_host_management 00:18:45.957 ************************************ 00:18:45.957 22:17:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:18:45.957 22:17:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:45.957 22:17:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:45.957 22:17:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:18:45.957 ************************************ 00:18:45.957 START TEST nvmf_lvol 00:18:45.957 ************************************ 00:18:45.957 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:18:45.957 * Looking for 
test storage... 00:18:45.957 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:45.957 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:45.957 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:18:45.957 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:45.957 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:45.957 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:45.957 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:45.957 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:45.957 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:18:45.957 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:18:45.957 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:18:45.957 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:18:45.957 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:18:45.957 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:18:45.957 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:18:45.957 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:45.957 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:18:45.957 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:18:45.957 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:45.957 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:45.957 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:18:45.957 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:18:45.957 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:45.957 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:18:45.957 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:18:45.957 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:18:45.957 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:18:45.957 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:45.957 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:18:45.957 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:18:45.957 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:45.957 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:45.957 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:18:45.957 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:45.957 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:45.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.957 --rc genhtml_branch_coverage=1 00:18:45.957 --rc genhtml_function_coverage=1 00:18:45.957 --rc genhtml_legend=1 00:18:45.957 --rc geninfo_all_blocks=1 00:18:45.957 --rc geninfo_unexecuted_blocks=1 00:18:45.957 00:18:45.957 ' 00:18:45.957 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:45.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.957 --rc genhtml_branch_coverage=1 00:18:45.957 --rc genhtml_function_coverage=1 00:18:45.957 --rc genhtml_legend=1 00:18:45.957 --rc geninfo_all_blocks=1 00:18:45.957 --rc geninfo_unexecuted_blocks=1 00:18:45.957 00:18:45.957 ' 00:18:45.957 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:45.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.957 --rc genhtml_branch_coverage=1 00:18:45.957 --rc genhtml_function_coverage=1 00:18:45.957 --rc genhtml_legend=1 00:18:45.957 --rc geninfo_all_blocks=1 00:18:45.957 --rc geninfo_unexecuted_blocks=1 00:18:45.957 00:18:45.957 ' 00:18:45.957 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:45.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.957 --rc genhtml_branch_coverage=1 00:18:45.957 --rc genhtml_function_coverage=1 00:18:45.957 --rc genhtml_legend=1 00:18:45.957 --rc geninfo_all_blocks=1 00:18:45.957 --rc geninfo_unexecuted_blocks=1 00:18:45.957 00:18:45.957 ' 00:18:45.957 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:45.957 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
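Editorial note: the cmp_versions walk a few entries above ("lt 1.15 2") is a field-by-field numeric compare after splitting the version strings on '.', '-' and ':'. A condensed, hypothetical re-implementation of that check (a sketch, not the scripts/common.sh function itself; assumes purely numeric fields):

# Succeeds when $1 sorts strictly before $2.
version_lt() {
    local -a a b
    IFS='.-:' read -ra a <<< "$1"
    IFS='.-:' read -ra b <<< "$2"
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing fields compare as 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov 1.15 predates 2: keep the legacy lcov options"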
00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:45.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:18:45.958 22:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:54.097 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:54.097 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:54.097 22:17:47 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:54.097 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:54.097 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:54.097 22:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:54.097 22:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:54.097 22:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:54.097 22:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:54.097 22:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:54.097 22:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:54.097 22:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:54.097 22:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:54.097 22:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:54.097 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:54.097 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:18:54.097 00:18:54.097 --- 10.0.0.2 ping statistics --- 00:18:54.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:54.097 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:18:54.097 22:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:54.097 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:54.097 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:18:54.097 00:18:54.097 --- 10.0.0.1 ping statistics --- 00:18:54.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:54.097 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:18:54.098 22:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:54.098 22:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:18:54.098 22:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:54.098 22:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:54.098 22:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:18:54.098 22:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:18:54.098 22:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:54.098 22:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:18:54.098 22:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:18:54.098 22:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:18:54.098 22:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:54.098 22:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:54.098 22:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:18:54.098 22:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=17818 00:18:54.098 22:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 17818 00:18:54.098 22:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:54.098 22:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 17818 ']' 00:18:54.098 22:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:54.098 22:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:54.098 22:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:54.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:54.098 22:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:54.098 22:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:18:54.098 [2024-10-01 22:17:48.374842] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
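Editorial note: collecting the nvmf_tcp_init plumbing above into one place: the target-side NIC is moved into a private network namespace, both ends get 10.0.0.x/24 addresses, a firewall exception is punched for the NVMe/TCP port, and connectivity is proven with a ping in each direction. A condensed sketch of the same sequence, using this run's cvl_0_* interface names (substitute your own NICs):

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start from clean addressing
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                        # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                 # target -> initiator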
00:18:54.098 [2024-10-01 22:17:48.374909] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:54.098 [2024-10-01 22:17:48.446963] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:54.098 [2024-10-01 22:17:48.522893] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:54.098 [2024-10-01 22:17:48.522934] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:54.098 [2024-10-01 22:17:48.522942] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:54.098 [2024-10-01 22:17:48.522948] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:54.098 [2024-10-01 22:17:48.522954] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:54.098 [2024-10-01 22:17:48.523132] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:54.098 [2024-10-01 22:17:48.523250] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:54.098 [2024-10-01 22:17:48.523252] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.098 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:54.098 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:18:54.098 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:54.098 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:54.098 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:18:54.098 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:54.098 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:54.357 [2024-10-01 22:17:49.380203] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:54.357 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:54.357 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:18:54.357 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:54.618 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:18:54.618 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:18:54.879 22:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:18:55.140 22:17:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=f35c1eff-ca20-4528-9432-ae96ca28e886 00:18:55.140 22:17:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f35c1eff-ca20-4528-9432-ae96ca28e886 lvol 20 00:18:55.140 22:17:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=73b07f55-65ac-4626-9989-75eec1b698c4 00:18:55.140 22:17:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:55.399 22:17:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 73b07f55-65ac-4626-9989-75eec1b698c4 00:18:55.659 22:17:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:55.659 [2024-10-01 22:17:50.855219] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:55.659 22:17:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:55.920 22:17:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:18:55.920 22:17:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=18331 00:18:55.920 22:17:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:18:56.863 22:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 73b07f55-65ac-4626-9989-75eec1b698c4 MY_SNAPSHOT 00:18:57.124 22:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=56a32ee4-b9e4-4cf0-976f-532c08659f1a 00:18:57.124 22:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 73b07f55-65ac-4626-9989-75eec1b698c4 30 00:18:57.384 22:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 56a32ee4-b9e4-4cf0-976f-532c08659f1a MY_CLONE 00:18:57.646 22:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=1b641bd3-4ad3-4b0e-b115-6a724bb4c8ed 00:18:57.646 22:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 1b641bd3-4ad3-4b0e-b115-6a724bb4c8ed 00:18:58.218 22:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 18331 00:19:06.355 Initializing NVMe Controllers 00:19:06.355 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:19:06.355 Controller IO queue size 128, less than required. 00:19:06.355 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
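Everything between nvmf_lvol.sh@41 and the `wait 18331` above is the point of the test: while spdk_nvme_perf drives random writes against the exported lvol, the script snapshots the lvol, resizes it, clones the snapshot, and inflates the clone. A condensed sketch of that RPC sequence, reconstructed from the trace rather than taken from the script itself (shell variables are illustrative; every RPC name appears verbatim above):

    rpc="$SPDK/scripts/rpc.py"
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'  # stripe the two malloc bdevs
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                 # 20 MiB thin lvol
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)             # taken under live I/O
    $rpc bdev_lvol_resize "$lvol" 30                                # grow the live lvol to 30 MiB
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"                                 # decouple the clone from its snapshot

The two "Associating TCP ... with lcore" lines that follow are the perf job's qpairs landing on the two cores of its -c 0x18 mask.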
00:19:06.355 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:19:06.355 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:19:06.355 Initialization complete. Launching workers. 00:19:06.355 ======================================================== 00:19:06.355 Latency(us) 00:19:06.355 Device Information : IOPS MiB/s Average min max 00:19:06.355 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12059.70 47.11 10614.46 1483.66 40706.05 00:19:06.355 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17502.70 68.37 7315.07 1270.69 57006.27 00:19:06.355 ======================================================== 00:19:06.355 Total : 29562.40 115.48 8661.02 1270.69 57006.27 00:19:06.355 00:19:06.355 22:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:06.614 22:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 73b07f55-65ac-4626-9989-75eec1b698c4 00:19:06.875 22:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f35c1eff-ca20-4528-9432-ae96ca28e886 00:19:06.875 22:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:19:06.875 22:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:19:06.875 22:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:19:06.875 22:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:19:06.875 22:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:19:06.875 22:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:06.875 22:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:19:06.875 22:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:06.875 22:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:06.875 rmmod nvme_tcp 00:19:06.875 rmmod nvme_fabrics 00:19:06.875 rmmod nvme_keyring 00:19:07.135 22:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:07.135 22:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:19:07.135 22:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:19:07.135 22:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 17818 ']' 00:19:07.135 22:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 17818 00:19:07.135 22:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 17818 ']' 00:19:07.135 22:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 17818 00:19:07.135 22:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:19:07.135 22:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:07.135 22:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 17818 00:19:07.135 22:18:02 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:07.135 22:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:07.135 22:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 17818' 00:19:07.135 killing process with pid 17818 00:19:07.135 22:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 17818 00:19:07.135 22:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 17818 00:19:07.395 22:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:19:07.395 22:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:19:07.395 22:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:19:07.395 22:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:19:07.395 22:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:19:07.395 22:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:19:07.395 22:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:19:07.395 22:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:07.395 22:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:07.395 22:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.395 22:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:07.395 22:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:09.304 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:09.304 00:19:09.304 real 0m23.793s 00:19:09.304 user 1m4.698s 00:19:09.304 sys 0m8.398s 00:19:09.304 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:09.304 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:19:09.304 ************************************ 00:19:09.304 END TEST nvmf_lvol 00:19:09.304 ************************************ 00:19:09.304 22:18:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:19:09.304 22:18:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:09.304 22:18:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:09.304 22:18:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:19:09.565 ************************************ 00:19:09.565 START TEST nvmf_lvs_grow 00:19:09.565 ************************************ 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:19:09.565 * Looking for test storage... 
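The shutdown above is the stock nvmftestfini path: unload the initiator-side NVMe modules, kill the target by pid, restore the iptables state minus the SPDK-tagged rules, and remove the namespace. Condensed as a sketch from the commands visible in the trace (the function wrapper is editorial framing, and `ip netns delete` is an assumption about what _remove_spdk_ns expands to):

    nvmf_teardown() {
        modprobe -v -r nvme-tcp        # also unloads nvme_fabrics and nvme_keyring, per the rmmod lines
        modprobe -v -r nvme-fabrics
        kill "$nvmfpid" && wait "$nvmfpid"
        iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK_NVMF-tagged rules
        ip netns delete cvl_0_0_ns_spdk                        # assumed body of _remove_spdk_ns
        ip -4 addr flush cvl_0_1
    }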
00:19:09.565 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:09.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:09.565 --rc genhtml_branch_coverage=1 00:19:09.565 --rc genhtml_function_coverage=1 00:19:09.565 --rc genhtml_legend=1 00:19:09.565 --rc geninfo_all_blocks=1 00:19:09.565 --rc geninfo_unexecuted_blocks=1 00:19:09.565 00:19:09.565 ' 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:09.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:09.565 --rc genhtml_branch_coverage=1 00:19:09.565 --rc genhtml_function_coverage=1 00:19:09.565 --rc genhtml_legend=1 00:19:09.565 --rc geninfo_all_blocks=1 00:19:09.565 --rc geninfo_unexecuted_blocks=1 00:19:09.565 00:19:09.565 ' 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:09.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:09.565 --rc genhtml_branch_coverage=1 00:19:09.565 --rc genhtml_function_coverage=1 00:19:09.565 --rc genhtml_legend=1 00:19:09.565 --rc geninfo_all_blocks=1 00:19:09.565 --rc geninfo_unexecuted_blocks=1 00:19:09.565 00:19:09.565 ' 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:09.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:09.565 --rc genhtml_branch_coverage=1 00:19:09.565 --rc genhtml_function_coverage=1 00:19:09.565 --rc genhtml_legend=1 00:19:09.565 --rc geninfo_all_blocks=1 00:19:09.565 --rc geninfo_unexecuted_blocks=1 00:19:09.565 00:19:09.565 ' 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:19:09.565 22:18:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.565 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.566 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.566 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:19:09.566 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.566 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:19:09.566 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:09.566 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:09.566 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:09.566 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:09.566 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:09.566 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:09.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:09.566 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:09.566 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:09.566 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:09.566 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:09.566 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:09.566 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:19:09.566 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:19:09.566 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:09.566 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:19:09.566 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:19:09.566 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:19:09.566 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:09.566 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:09.566 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:09.566 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:19:09.566 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:19:09.566 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:19:09.566 22:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:17.713 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:17.713 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:17.713 22:18:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:17.713 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:17.713 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:17.713 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:17.714 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:17.714 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:17.714 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:17.714 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:17.714 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:17.714 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:17.714 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:17.714 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:17.714 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:17.714 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:17.714 22:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:17.714 22:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:17.714 22:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:17.714 22:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:17.714 22:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:17.714 22:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:17.714 22:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:17.714 22:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:17.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:17.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.680 ms 00:19:17.714 00:19:17.714 --- 10.0.0.2 ping statistics --- 00:19:17.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.714 rtt min/avg/max/mdev = 0.680/0.680/0.680/0.000 ms 00:19:17.714 22:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:17.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
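The block above is the physical-NIC network bring-up: port 0 of the e810 pair (cvl_0_0) moves into a fresh cvl_0_0_ns_spdk namespace as the target side, port 1 (cvl_0_1) stays in the root namespace as the initiator, an iptables rule admits the NVMe/TCP port, and connectivity is ping-verified in both directions. A condensed sketch of exactly those commands (the iptables comment is shortened here):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port enters the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                                        # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator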
00:19:17.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:19:17.714 00:19:17.714 --- 10.0.0.1 ping statistics --- 00:19:17.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.714 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:19:17.714 22:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:17.714 22:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:19:17.714 22:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:19:17.714 22:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:17.714 22:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:19:17.714 22:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:19:17.714 22:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:17.714 22:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:19:17.714 22:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:19:17.714 22:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:19:17.714 22:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:17.714 22:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:17.714 22:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:19:17.714 22:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=25499 00:19:17.714 22:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 25499 00:19:17.714 22:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:17.714 22:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 25499 ']' 00:19:17.714 22:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.714 22:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:17.714 22:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:17.714 22:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:17.714 22:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:19:17.714 [2024-10-01 22:18:12.385178] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:19:17.714 [2024-10-01 22:18:12.385231] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:17.714 [2024-10-01 22:18:12.452843] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.714 [2024-10-01 22:18:12.519405] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:17.714 [2024-10-01 22:18:12.519444] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:17.714 [2024-10-01 22:18:12.519455] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:17.714 [2024-10-01 22:18:12.519462] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:17.714 [2024-10-01 22:18:12.519468] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:17.714 [2024-10-01 22:18:12.519487] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.973 22:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:17.973 22:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:19:17.973 22:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:17.973 22:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:17.973 22:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:19:17.973 22:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:17.973 22:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:18.233 [2024-10-01 22:18:13.376148] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:18.233 22:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:19:18.233 22:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:18.233 22:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:18.233 22:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:19:18.233 ************************************ 00:19:18.233 START TEST lvs_grow_clean 00:19:18.233 ************************************ 00:19:18.233 22:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:19:18.233 22:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:19:18.233 22:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:19:18.233 22:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:19:18.233 22:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:19:18.233 22:18:13 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:19:18.233 22:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:19:18.233 22:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:18.233 22:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:18.233 22:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:19:18.493 22:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:19:18.493 22:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:19:18.753 22:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=3c714459-1101-40cb-b6e4-f7c47ac08d05 00:19:18.753 22:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c714459-1101-40cb-b6e4-f7c47ac08d05 00:19:18.753 22:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:19:19.012 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:19:19.012 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:19:19.012 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3c714459-1101-40cb-b6e4-f7c47ac08d05 lvol 150 00:19:19.012 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=3e05ea29-12e0-4bee-a079-2aa2f161bf7b 00:19:19.012 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:19.012 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:19:19.272 [2024-10-01 22:18:14.340336] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:19:19.272 [2024-10-01 22:18:14.340388] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:19:19.272 true 00:19:19.272 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
3c714459-1101-40cb-b6e4-f7c47ac08d05 00:19:19.272 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:19:19.272 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:19:19.272 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:19:19.531 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3e05ea29-12e0-4bee-a079-2aa2f161bf7b 00:19:19.791 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:19.791 [2024-10-01 22:18:15.002395] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:19.791 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:20.051 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=25982 00:19:20.051 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:20.051 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 25982 /var/tmp/bdevperf.sock 00:19:20.051 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 25982 ']' 00:19:20.051 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:20.051 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:20.051 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:20.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:20.051 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:20.051 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:19:20.051 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:19:20.051 [2024-10-01 22:18:15.230150] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
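This is the core of lvs_grow_clean: a 200 MiB file-backed AIO bdev yields a lvstore of 50 4-MiB clusters, 49 of them usable for data; a 150 MiB lvol is carved out and exported over NVMe/TCP; then the backing file is doubled and rescanned, and the cluster count deliberately stays at 49 until bdev_lvol_grow_lvstore runs under bdevperf load (the 49 -> 99 jump shows up below). A condensed sketch of the grow path built from the RPCs in the trace ($testdir stands in for the long workspace path):

    truncate -s 200M "$testdir/aio_bdev"
    $rpc bdev_aio_create "$testdir/aio_bdev" aio_bdev 4096
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
              --md-pages-per-cluster-ratio 300 aio_bdev lvs)  # 49 data clusters
    $rpc bdev_lvol_create -u "$lvs" lvol 150                  # 150 MiB lvol
    truncate -s 400M "$testdir/aio_bdev"                      # double the backing file
    $rpc bdev_aio_rescan aio_bdev           # 51200 -> 102400 blocks; lvstore still reports 49
    $rpc bdev_lvol_grow_lvstore -u "$lvs"   # issued later, mid-I/O: clusters 49 -> 99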
00:19:20.051 [2024-10-01 22:18:15.230206] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid25982 ] 00:19:20.312 [2024-10-01 22:18:15.308654] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.312 [2024-10-01 22:18:15.373765] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:20.882 22:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:20.882 22:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:19:20.882 22:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:19:21.142 Nvme0n1 00:19:21.402 22:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:19:21.402 [ 00:19:21.402 { 00:19:21.402 "name": "Nvme0n1", 00:19:21.402 "aliases": [ 00:19:21.402 "3e05ea29-12e0-4bee-a079-2aa2f161bf7b" 00:19:21.403 ], 00:19:21.403 "product_name": "NVMe disk", 00:19:21.403 "block_size": 4096, 00:19:21.403 "num_blocks": 38912, 00:19:21.403 "uuid": "3e05ea29-12e0-4bee-a079-2aa2f161bf7b", 00:19:21.403 "numa_id": 0, 00:19:21.403 "assigned_rate_limits": { 00:19:21.403 "rw_ios_per_sec": 0, 00:19:21.403 "rw_mbytes_per_sec": 0, 00:19:21.403 "r_mbytes_per_sec": 0, 00:19:21.403 "w_mbytes_per_sec": 0 00:19:21.403 }, 00:19:21.403 "claimed": false, 00:19:21.403 "zoned": false, 00:19:21.403 "supported_io_types": { 00:19:21.403 "read": true, 00:19:21.403 "write": true, 00:19:21.403 "unmap": true, 00:19:21.403 "flush": true, 00:19:21.403 "reset": true, 00:19:21.403 "nvme_admin": true, 00:19:21.403 "nvme_io": true, 00:19:21.403 "nvme_io_md": false, 00:19:21.403 "write_zeroes": true, 00:19:21.403 "zcopy": false, 00:19:21.403 "get_zone_info": false, 00:19:21.403 "zone_management": false, 00:19:21.403 "zone_append": false, 00:19:21.403 "compare": true, 00:19:21.403 "compare_and_write": true, 00:19:21.403 "abort": true, 00:19:21.403 "seek_hole": false, 00:19:21.403 "seek_data": false, 00:19:21.403 "copy": true, 00:19:21.403 "nvme_iov_md": false 00:19:21.403 }, 00:19:21.403 "memory_domains": [ 00:19:21.403 { 00:19:21.403 "dma_device_id": "system", 00:19:21.403 "dma_device_type": 1 00:19:21.403 } 00:19:21.403 ], 00:19:21.403 "driver_specific": { 00:19:21.403 "nvme": [ 00:19:21.403 { 00:19:21.403 "trid": { 00:19:21.403 "trtype": "TCP", 00:19:21.403 "adrfam": "IPv4", 00:19:21.403 "traddr": "10.0.0.2", 00:19:21.403 "trsvcid": "4420", 00:19:21.403 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:21.403 }, 00:19:21.403 "ctrlr_data": { 00:19:21.403 "cntlid": 1, 00:19:21.403 "vendor_id": "0x8086", 00:19:21.403 "model_number": "SPDK bdev Controller", 00:19:21.403 "serial_number": "SPDK0", 00:19:21.403 "firmware_revision": "25.01", 00:19:21.403 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:21.403 "oacs": { 00:19:21.403 "security": 0, 00:19:21.403 "format": 0, 00:19:21.403 "firmware": 0, 00:19:21.403 "ns_manage": 0 00:19:21.403 }, 00:19:21.403 "multi_ctrlr": true, 00:19:21.403 
"ana_reporting": false 00:19:21.403 }, 00:19:21.403 "vs": { 00:19:21.403 "nvme_version": "1.3" 00:19:21.403 }, 00:19:21.403 "ns_data": { 00:19:21.403 "id": 1, 00:19:21.403 "can_share": true 00:19:21.403 } 00:19:21.403 } 00:19:21.403 ], 00:19:21.403 "mp_policy": "active_passive" 00:19:21.403 } 00:19:21.403 } 00:19:21.403 ] 00:19:21.403 22:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=26321 00:19:21.403 22:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:19:21.403 22:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:21.663 Running I/O for 10 seconds... 00:19:22.633 Latency(us) 00:19:22.633 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:22.633 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:22.633 Nvme0n1 : 1.00 17567.00 68.62 0.00 0.00 0.00 0.00 0.00 00:19:22.633 =================================================================================================================== 00:19:22.633 Total : 17567.00 68.62 0.00 0.00 0.00 0.00 0.00 00:19:22.633 00:19:23.573 22:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3c714459-1101-40cb-b6e4-f7c47ac08d05 00:19:23.573 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:23.573 Nvme0n1 : 2.00 17744.50 69.31 0.00 0.00 0.00 0.00 0.00 00:19:23.573 =================================================================================================================== 00:19:23.573 Total : 17744.50 69.31 0.00 0.00 0.00 0.00 0.00 00:19:23.573 00:19:23.573 true 00:19:23.573 22:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c714459-1101-40cb-b6e4-f7c47ac08d05 00:19:23.574 22:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:19:23.833 22:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:19:23.833 22:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:19:23.833 22:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 26321 00:19:24.772 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:24.772 Nvme0n1 : 3.00 17795.67 69.51 0.00 0.00 0.00 0.00 0.00 00:19:24.772 =================================================================================================================== 00:19:24.772 Total : 17795.67 69.51 0.00 0.00 0.00 0.00 0.00 00:19:24.772 00:19:25.713 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:25.713 Nvme0n1 : 4.00 17834.25 69.67 0.00 0.00 0.00 0.00 0.00 00:19:25.713 =================================================================================================================== 00:19:25.713 Total : 17834.25 69.67 0.00 0.00 0.00 0.00 0.00 00:19:25.713 00:19:26.654 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:26.654 Nvme0n1 : 
5.00 17860.60 69.77 0.00 0.00 0.00 0.00 0.00 00:19:26.654 =================================================================================================================== 00:19:26.654 Total : 17860.60 69.77 0.00 0.00 0.00 0.00 0.00 00:19:26.654 00:19:27.596 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:27.596 Nvme0n1 : 6.00 17898.17 69.91 0.00 0.00 0.00 0.00 0.00 00:19:27.596 =================================================================================================================== 00:19:27.596 Total : 17898.17 69.91 0.00 0.00 0.00 0.00 0.00 00:19:27.596 00:19:28.535 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:28.535 Nvme0n1 : 7.00 17908.57 69.96 0.00 0.00 0.00 0.00 0.00 00:19:28.535 =================================================================================================================== 00:19:28.535 Total : 17908.57 69.96 0.00 0.00 0.00 0.00 0.00 00:19:28.535 00:19:29.475 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:29.475 Nvme0n1 : 8.00 17930.12 70.04 0.00 0.00 0.00 0.00 0.00 00:19:29.475 =================================================================================================================== 00:19:29.475 Total : 17930.12 70.04 0.00 0.00 0.00 0.00 0.00 00:19:29.475 00:19:30.859 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:30.859 Nvme0n1 : 9.00 17948.00 70.11 0.00 0.00 0.00 0.00 0.00 00:19:30.859 =================================================================================================================== 00:19:30.859 Total : 17948.00 70.11 0.00 0.00 0.00 0.00 0.00 00:19:30.859 00:19:31.800 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:31.800 Nvme0n1 : 10.00 17957.00 70.14 0.00 0.00 0.00 0.00 0.00 00:19:31.800 =================================================================================================================== 00:19:31.800 Total : 17957.00 70.14 0.00 0.00 0.00 0.00 0.00 00:19:31.800 00:19:31.800 00:19:31.800 Latency(us) 00:19:31.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.800 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:31.800 Nvme0n1 : 10.00 17955.95 70.14 0.00 0.00 7124.34 4287.15 18240.85 00:19:31.800 =================================================================================================================== 00:19:31.800 Total : 17955.95 70.14 0.00 0.00 7124.34 4287.15 18240.85 00:19:31.800 { 00:19:31.800 "results": [ 00:19:31.800 { 00:19:31.800 "job": "Nvme0n1", 00:19:31.800 "core_mask": "0x2", 00:19:31.800 "workload": "randwrite", 00:19:31.800 "status": "finished", 00:19:31.800 "queue_depth": 128, 00:19:31.800 "io_size": 4096, 00:19:31.800 "runtime": 10.004206, 00:19:31.800 "iops": 17955.94772838544, 00:19:31.800 "mibps": 70.14042081400562, 00:19:31.800 "io_failed": 0, 00:19:31.800 "io_timeout": 0, 00:19:31.800 "avg_latency_us": 7124.335859418636, 00:19:31.800 "min_latency_us": 4287.1466666666665, 00:19:31.800 "max_latency_us": 18240.853333333333 00:19:31.800 } 00:19:31.800 ], 00:19:31.800 "core_count": 1 00:19:31.800 } 00:19:31.800 22:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 25982 00:19:31.800 22:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 25982 ']' 00:19:31.800 22:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@954 -- # kill -0 25982 00:19:31.800 22:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:19:31.800 22:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:31.800 22:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 25982 00:19:31.800 22:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:31.800 22:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:31.800 22:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 25982' 00:19:31.800 killing process with pid 25982 00:19:31.800 22:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 25982 00:19:31.800 Received shutdown signal, test time was about 10.000000 seconds 00:19:31.800 00:19:31.800 Latency(us) 00:19:31.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.800 =================================================================================================================== 00:19:31.800 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:31.800 22:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 25982 00:19:31.800 22:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:32.060 22:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:32.060 22:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c714459-1101-40cb-b6e4-f7c47ac08d05 00:19:32.060 22:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:19:32.321 22:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:19:32.321 22:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:19:32.321 22:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:19:32.582 [2024-10-01 22:18:27.628195] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:19:32.582 22:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c714459-1101-40cb-b6e4-f7c47ac08d05 00:19:32.582 22:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:19:32.582 22:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c714459-1101-40cb-b6e4-f7c47ac08d05 00:19:32.582 22:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:32.582 22:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:32.582 22:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:32.582 22:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:32.582 22:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:32.582 22:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:32.582 22:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:32.582 22:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:19:32.582 22:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c714459-1101-40cb-b6e4-f7c47ac08d05 00:19:32.842 request: 00:19:32.842 { 00:19:32.842 "uuid": "3c714459-1101-40cb-b6e4-f7c47ac08d05", 00:19:32.842 "method": "bdev_lvol_get_lvstores", 00:19:32.842 "req_id": 1 00:19:32.842 } 00:19:32.842 Got JSON-RPC error response 00:19:32.842 response: 00:19:32.842 { 00:19:32.842 "code": -19, 00:19:32.842 "message": "No such device" 00:19:32.842 } 00:19:32.842 22:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:19:32.842 22:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:32.842 22:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:32.842 22:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:32.842 22:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:19:32.842 aio_bdev 00:19:32.843 22:18:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 3e05ea29-12e0-4bee-a079-2aa2f161bf7b 00:19:32.843 22:18:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=3e05ea29-12e0-4bee-a079-2aa2f161bf7b 00:19:32.843 22:18:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:32.843 22:18:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:19:32.843 22:18:28 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:32.843 22:18:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:32.843 22:18:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:33.104 22:18:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3e05ea29-12e0-4bee-a079-2aa2f161bf7b -t 2000 00:19:33.104 [ 00:19:33.104 { 00:19:33.104 "name": "3e05ea29-12e0-4bee-a079-2aa2f161bf7b", 00:19:33.104 "aliases": [ 00:19:33.104 "lvs/lvol" 00:19:33.104 ], 00:19:33.104 "product_name": "Logical Volume", 00:19:33.104 "block_size": 4096, 00:19:33.104 "num_blocks": 38912, 00:19:33.104 "uuid": "3e05ea29-12e0-4bee-a079-2aa2f161bf7b", 00:19:33.104 "assigned_rate_limits": { 00:19:33.104 "rw_ios_per_sec": 0, 00:19:33.104 "rw_mbytes_per_sec": 0, 00:19:33.104 "r_mbytes_per_sec": 0, 00:19:33.104 "w_mbytes_per_sec": 0 00:19:33.104 }, 00:19:33.104 "claimed": false, 00:19:33.104 "zoned": false, 00:19:33.104 "supported_io_types": { 00:19:33.104 "read": true, 00:19:33.104 "write": true, 00:19:33.104 "unmap": true, 00:19:33.104 "flush": false, 00:19:33.104 "reset": true, 00:19:33.104 "nvme_admin": false, 00:19:33.104 "nvme_io": false, 00:19:33.104 "nvme_io_md": false, 00:19:33.104 "write_zeroes": true, 00:19:33.104 "zcopy": false, 00:19:33.104 "get_zone_info": false, 00:19:33.104 "zone_management": false, 00:19:33.104 "zone_append": false, 00:19:33.104 "compare": false, 00:19:33.104 "compare_and_write": false, 00:19:33.104 "abort": false, 00:19:33.104 "seek_hole": true, 00:19:33.104 "seek_data": true, 00:19:33.104 "copy": false, 00:19:33.104 "nvme_iov_md": false 00:19:33.104 }, 00:19:33.104 "driver_specific": { 00:19:33.104 "lvol": { 00:19:33.104 "lvol_store_uuid": "3c714459-1101-40cb-b6e4-f7c47ac08d05", 00:19:33.104 "base_bdev": "aio_bdev", 00:19:33.104 "thin_provision": false, 00:19:33.104 "num_allocated_clusters": 38, 00:19:33.104 "snapshot": false, 00:19:33.104 "clone": false, 00:19:33.104 "esnap_clone": false 00:19:33.104 } 00:19:33.104 } 00:19:33.104 } 00:19:33.104 ] 00:19:33.104 22:18:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:19:33.104 22:18:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c714459-1101-40cb-b6e4-f7c47ac08d05 00:19:33.105 22:18:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:19:33.365 22:18:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:19:33.365 22:18:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c714459-1101-40cb-b6e4-f7c47ac08d05 00:19:33.365 22:18:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:19:33.696 22:18:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:19:33.696 22:18:28 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3e05ea29-12e0-4bee-a079-2aa2f161bf7b 00:19:33.696 22:18:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3c714459-1101-40cb-b6e4-f7c47ac08d05 00:19:34.027 22:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:19:34.027 22:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:34.027 00:19:34.027 real 0m15.799s 00:19:34.027 user 0m15.388s 00:19:34.027 sys 0m1.406s 00:19:34.027 22:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:34.027 22:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:19:34.027 ************************************ 00:19:34.027 END TEST lvs_grow_clean 00:19:34.027 ************************************ 00:19:34.306 22:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:19:34.306 22:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:34.306 22:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:34.306 22:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:19:34.306 ************************************ 00:19:34.306 START TEST lvs_grow_dirty 00:19:34.306 ************************************ 00:19:34.306 22:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:19:34.306 22:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:19:34.306 22:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:19:34.306 22:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:19:34.306 22:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:19:34.306 22:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:19:34.306 22:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:19:34.306 22:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:34.306 22:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:34.306 22:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:19:34.306 22:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:19:34.306 22:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:19:34.567 22:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=0839780f-93e3-4d94-940c-8e06e6ec5c9e 00:19:34.567 22:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0839780f-93e3-4d94-940c-8e06e6ec5c9e 00:19:34.567 22:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:19:34.826 22:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:19:34.826 22:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:19:34.826 22:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0839780f-93e3-4d94-940c-8e06e6ec5c9e lvol 150 00:19:34.826 22:18:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=11083f5c-934f-43f5-9ee6-14730e454727 00:19:34.826 22:18:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:34.827 22:18:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:19:35.086 [2024-10-01 22:18:30.168181] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:19:35.086 [2024-10-01 22:18:30.168235] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:19:35.086 true 00:19:35.086 22:18:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0839780f-93e3-4d94-940c-8e06e6ec5c9e 00:19:35.086 22:18:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:19:35.349 22:18:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:19:35.349 22:18:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:19:35.349 22:18:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 11083f5c-934f-43f5-9ee6-14730e454727 00:19:35.609 22:18:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:35.609 [2024-10-01 22:18:30.846385] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:35.609 22:18:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:35.869 22:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=29354 00:19:35.869 22:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:35.869 22:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 29354 /var/tmp/bdevperf.sock 00:19:35.869 22:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 29354 ']' 00:19:35.869 22:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:35.869 22:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:35.870 22:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:35.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:35.870 22:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:35.870 22:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:19:35.870 22:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:19:35.870 [2024-10-01 22:18:31.077809] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
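For reference, the bdevperf run the trace executes next can be reproduced by hand. The following is a minimal sketch assuming the same workspace prefix, socket path, and target address shown in this log; every binary, flag, and RPC below appears verbatim in the trace.

    # Start bdevperf idle (-z waits for RPC configuration), attach the
    # NVMe-oF TCP namespace exported by the target as bdev "Nvme0", then
    # kick off the 4 KiB, queue-depth-128 randwrite workload over the
    # bdevperf RPC socket.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 \
        -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

While perform_tests is running, nvmf_lvs_grow.sh issues bdev_lvol_grow_lvstore against the live lvstore, which is why the per-second IOPS samples below continue uninterrupted across the grow.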
00:19:35.870 [2024-10-01 22:18:31.077861] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid29354 ] 00:19:36.130 [2024-10-01 22:18:31.153204] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.130 [2024-10-01 22:18:31.217746] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:36.700 22:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:36.700 22:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:19:36.700 22:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:19:36.959 Nvme0n1 00:19:36.960 22:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:19:37.220 [ 00:19:37.220 { 00:19:37.220 "name": "Nvme0n1", 00:19:37.220 "aliases": [ 00:19:37.220 "11083f5c-934f-43f5-9ee6-14730e454727" 00:19:37.220 ], 00:19:37.220 "product_name": "NVMe disk", 00:19:37.220 "block_size": 4096, 00:19:37.220 "num_blocks": 38912, 00:19:37.220 "uuid": "11083f5c-934f-43f5-9ee6-14730e454727", 00:19:37.220 "numa_id": 0, 00:19:37.220 "assigned_rate_limits": { 00:19:37.220 "rw_ios_per_sec": 0, 00:19:37.220 "rw_mbytes_per_sec": 0, 00:19:37.220 "r_mbytes_per_sec": 0, 00:19:37.220 "w_mbytes_per_sec": 0 00:19:37.220 }, 00:19:37.220 "claimed": false, 00:19:37.220 "zoned": false, 00:19:37.220 "supported_io_types": { 00:19:37.220 "read": true, 00:19:37.220 "write": true, 00:19:37.220 "unmap": true, 00:19:37.220 "flush": true, 00:19:37.220 "reset": true, 00:19:37.220 "nvme_admin": true, 00:19:37.220 "nvme_io": true, 00:19:37.220 "nvme_io_md": false, 00:19:37.220 "write_zeroes": true, 00:19:37.220 "zcopy": false, 00:19:37.220 "get_zone_info": false, 00:19:37.220 "zone_management": false, 00:19:37.220 "zone_append": false, 00:19:37.220 "compare": true, 00:19:37.220 "compare_and_write": true, 00:19:37.220 "abort": true, 00:19:37.220 "seek_hole": false, 00:19:37.220 "seek_data": false, 00:19:37.220 "copy": true, 00:19:37.220 "nvme_iov_md": false 00:19:37.220 }, 00:19:37.220 "memory_domains": [ 00:19:37.220 { 00:19:37.220 "dma_device_id": "system", 00:19:37.220 "dma_device_type": 1 00:19:37.220 } 00:19:37.220 ], 00:19:37.220 "driver_specific": { 00:19:37.220 "nvme": [ 00:19:37.220 { 00:19:37.220 "trid": { 00:19:37.220 "trtype": "TCP", 00:19:37.220 "adrfam": "IPv4", 00:19:37.220 "traddr": "10.0.0.2", 00:19:37.220 "trsvcid": "4420", 00:19:37.220 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:37.220 }, 00:19:37.220 "ctrlr_data": { 00:19:37.220 "cntlid": 1, 00:19:37.220 "vendor_id": "0x8086", 00:19:37.220 "model_number": "SPDK bdev Controller", 00:19:37.220 "serial_number": "SPDK0", 00:19:37.220 "firmware_revision": "25.01", 00:19:37.220 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:37.220 "oacs": { 00:19:37.220 "security": 0, 00:19:37.220 "format": 0, 00:19:37.220 "firmware": 0, 00:19:37.220 "ns_manage": 0 00:19:37.220 }, 00:19:37.220 "multi_ctrlr": true, 00:19:37.220 
"ana_reporting": false 00:19:37.220 }, 00:19:37.220 "vs": { 00:19:37.220 "nvme_version": "1.3" 00:19:37.220 }, 00:19:37.220 "ns_data": { 00:19:37.220 "id": 1, 00:19:37.220 "can_share": true 00:19:37.220 } 00:19:37.220 } 00:19:37.220 ], 00:19:37.220 "mp_policy": "active_passive" 00:19:37.220 } 00:19:37.220 } 00:19:37.220 ] 00:19:37.220 22:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=29469 00:19:37.220 22:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:19:37.220 22:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:37.220 Running I/O for 10 seconds... 00:19:38.161 Latency(us) 00:19:38.161 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:38.161 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:38.161 Nvme0n1 : 1.00 17653.00 68.96 0.00 0.00 0.00 0.00 0.00 00:19:38.161 =================================================================================================================== 00:19:38.161 Total : 17653.00 68.96 0.00 0.00 0.00 0.00 0.00 00:19:38.161 00:19:39.102 22:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0839780f-93e3-4d94-940c-8e06e6ec5c9e 00:19:39.362 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:39.362 Nvme0n1 : 2.00 17705.00 69.16 0.00 0.00 0.00 0.00 0.00 00:19:39.362 =================================================================================================================== 00:19:39.362 Total : 17705.00 69.16 0.00 0.00 0.00 0.00 0.00 00:19:39.362 00:19:39.362 true 00:19:39.362 22:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0839780f-93e3-4d94-940c-8e06e6ec5c9e 00:19:39.362 22:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:19:39.621 22:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:19:39.621 22:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:19:39.621 22:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 29469 00:19:40.189 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:40.189 Nvme0n1 : 3.00 17766.00 69.40 0.00 0.00 0.00 0.00 0.00 00:19:40.189 =================================================================================================================== 00:19:40.189 Total : 17766.00 69.40 0.00 0.00 0.00 0.00 0.00 00:19:40.189 00:19:41.129 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:41.129 Nvme0n1 : 4.00 17808.75 69.57 0.00 0.00 0.00 0.00 0.00 00:19:41.129 =================================================================================================================== 00:19:41.129 Total : 17808.75 69.57 0.00 0.00 0.00 0.00 0.00 00:19:41.129 00:19:42.511 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:42.511 Nvme0n1 : 
5.00 17841.00 69.69 0.00 0.00 0.00 0.00 0.00 00:19:42.511 =================================================================================================================== 00:19:42.511 Total : 17841.00 69.69 0.00 0.00 0.00 0.00 0.00 00:19:42.511 00:19:43.501 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:43.501 Nvme0n1 : 6.00 17869.33 69.80 0.00 0.00 0.00 0.00 0.00 00:19:43.501 =================================================================================================================== 00:19:43.501 Total : 17869.33 69.80 0.00 0.00 0.00 0.00 0.00 00:19:43.501 00:19:44.441 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:44.441 Nvme0n1 : 7.00 17875.43 69.83 0.00 0.00 0.00 0.00 0.00 00:19:44.441 =================================================================================================================== 00:19:44.441 Total : 17875.43 69.83 0.00 0.00 0.00 0.00 0.00 00:19:44.441 00:19:45.383 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:45.383 Nvme0n1 : 8.00 17893.62 69.90 0.00 0.00 0.00 0.00 0.00 00:19:45.383 =================================================================================================================== 00:19:45.383 Total : 17893.62 69.90 0.00 0.00 0.00 0.00 0.00 00:19:45.383 00:19:46.325 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:46.325 Nvme0n1 : 9.00 17906.44 69.95 0.00 0.00 0.00 0.00 0.00 00:19:46.325 =================================================================================================================== 00:19:46.325 Total : 17906.44 69.95 0.00 0.00 0.00 0.00 0.00 00:19:46.325 00:19:47.266 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:47.266 Nvme0n1 : 10.00 17914.10 69.98 0.00 0.00 0.00 0.00 0.00 00:19:47.266 =================================================================================================================== 00:19:47.266 Total : 17914.10 69.98 0.00 0.00 0.00 0.00 0.00 00:19:47.266 00:19:47.266 00:19:47.266 Latency(us) 00:19:47.266 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:47.266 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:47.266 Nvme0n1 : 10.01 17918.15 69.99 0.00 0.00 7139.10 2075.31 13981.01 00:19:47.266 =================================================================================================================== 00:19:47.266 Total : 17918.15 69.99 0.00 0.00 7139.10 2075.31 13981.01 00:19:47.266 { 00:19:47.266 "results": [ 00:19:47.266 { 00:19:47.266 "job": "Nvme0n1", 00:19:47.266 "core_mask": "0x2", 00:19:47.266 "workload": "randwrite", 00:19:47.266 "status": "finished", 00:19:47.266 "queue_depth": 128, 00:19:47.266 "io_size": 4096, 00:19:47.266 "runtime": 10.008398, 00:19:47.266 "iops": 17918.152335668507, 00:19:47.266 "mibps": 69.9927825612051, 00:19:47.266 "io_failed": 0, 00:19:47.266 "io_timeout": 0, 00:19:47.266 "avg_latency_us": 7139.099954646503, 00:19:47.266 "min_latency_us": 2075.306666666667, 00:19:47.266 "max_latency_us": 13981.013333333334 00:19:47.266 } 00:19:47.266 ], 00:19:47.266 "core_count": 1 00:19:47.266 } 00:19:47.266 22:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 29354 00:19:47.266 22:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 29354 ']' 00:19:47.266 22:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@954 -- # kill -0 29354 00:19:47.266 22:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:19:47.266 22:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:47.266 22:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 29354 00:19:47.266 22:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:47.266 22:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:47.266 22:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 29354' 00:19:47.266 killing process with pid 29354 00:19:47.266 22:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 29354 00:19:47.266 Received shutdown signal, test time was about 10.000000 seconds 00:19:47.266 00:19:47.266 Latency(us) 00:19:47.266 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:47.266 =================================================================================================================== 00:19:47.266 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:47.266 22:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 29354 00:19:47.526 22:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:47.787 22:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:47.787 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0839780f-93e3-4d94-940c-8e06e6ec5c9e 00:19:47.787 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:19:48.048 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:19:48.048 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:19:48.048 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 25499 00:19:48.048 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 25499 00:19:48.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 25499 Killed "${NVMF_APP[@]}" "$@" 00:19:48.048 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:19:48.048 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:19:48.048 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:48.048 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:19:48.048 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:19:48.048 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=31776 00:19:48.048 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 31776 00:19:48.048 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:48.048 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 31776 ']' 00:19:48.048 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.048 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:48.048 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:48.048 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:48.048 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:19:48.048 [2024-10-01 22:18:43.276328] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:19:48.048 [2024-10-01 22:18:43.276383] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:48.308 [2024-10-01 22:18:43.344520] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.308 [2024-10-01 22:18:43.410286] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:48.308 [2024-10-01 22:18:43.410322] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:48.308 [2024-10-01 22:18:43.410330] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:48.308 [2024-10-01 22:18:43.410336] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:48.308 [2024-10-01 22:18:43.410342] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
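The restarted target now has to reload an lvstore whose backing AIO file was grown from 200M to 400M while the previous nvmf_tgt was killed with SIGKILL, so blobstore recovery (the bs_recover notices below) replays the metadata on load. A compressed sketch of the success-path check the script performs, using only the RPCs, UUID, and cluster counts that appear in this trace (the script first confirms the query fails with -19 while aio_bdev is absent, which the sketch omits):

    # Re-register the grown 400 MiB backing file and confirm the recovered
    # lvstore reflects the post-grow geometry: 99 x 4 MiB data clusters in
    # total, 38 of them allocated to the 150 MiB lvol, leaving 61 free.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    lvs=0839780f-93e3-4d94-940c-8e06e6ec5c9e
    $SPDK/scripts/rpc.py bdev_aio_create $SPDK/test/nvmf/target/aio_bdev aio_bdev 4096
    free=$($SPDK/scripts/rpc.py bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].free_clusters')
    total=$($SPDK/scripts/rpc.py bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].total_data_clusters')
    (( free == 61 && total == 99 )) || exit 1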
00:19:48.308 [2024-10-01 22:18:43.410360] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:48.878 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:48.878 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:19:48.878 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:48.878 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:48.878 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:19:48.878 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:48.878 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:19:49.139 [2024-10-01 22:18:44.262465] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:19:49.139 [2024-10-01 22:18:44.262564] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:19:49.139 [2024-10-01 22:18:44.262596] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:19:49.139 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:19:49.139 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 11083f5c-934f-43f5-9ee6-14730e454727 00:19:49.139 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=11083f5c-934f-43f5-9ee6-14730e454727 00:19:49.139 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:49.139 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:19:49.139 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:49.139 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:49.139 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:49.399 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 11083f5c-934f-43f5-9ee6-14730e454727 -t 2000 00:19:49.399 [ 00:19:49.399 { 00:19:49.399 "name": "11083f5c-934f-43f5-9ee6-14730e454727", 00:19:49.399 "aliases": [ 00:19:49.399 "lvs/lvol" 00:19:49.399 ], 00:19:49.399 "product_name": "Logical Volume", 00:19:49.399 "block_size": 4096, 00:19:49.399 "num_blocks": 38912, 00:19:49.399 "uuid": "11083f5c-934f-43f5-9ee6-14730e454727", 00:19:49.399 "assigned_rate_limits": { 00:19:49.399 "rw_ios_per_sec": 0, 00:19:49.399 "rw_mbytes_per_sec": 0, 00:19:49.399 "r_mbytes_per_sec": 0, 00:19:49.399 "w_mbytes_per_sec": 0 00:19:49.399 }, 00:19:49.399 "claimed": false, 00:19:49.399 "zoned": false, 
00:19:49.399 "supported_io_types": { 00:19:49.399 "read": true, 00:19:49.399 "write": true, 00:19:49.399 "unmap": true, 00:19:49.399 "flush": false, 00:19:49.399 "reset": true, 00:19:49.399 "nvme_admin": false, 00:19:49.399 "nvme_io": false, 00:19:49.399 "nvme_io_md": false, 00:19:49.399 "write_zeroes": true, 00:19:49.399 "zcopy": false, 00:19:49.399 "get_zone_info": false, 00:19:49.399 "zone_management": false, 00:19:49.399 "zone_append": false, 00:19:49.399 "compare": false, 00:19:49.399 "compare_and_write": false, 00:19:49.399 "abort": false, 00:19:49.399 "seek_hole": true, 00:19:49.399 "seek_data": true, 00:19:49.399 "copy": false, 00:19:49.399 "nvme_iov_md": false 00:19:49.399 }, 00:19:49.399 "driver_specific": { 00:19:49.399 "lvol": { 00:19:49.399 "lvol_store_uuid": "0839780f-93e3-4d94-940c-8e06e6ec5c9e", 00:19:49.399 "base_bdev": "aio_bdev", 00:19:49.399 "thin_provision": false, 00:19:49.399 "num_allocated_clusters": 38, 00:19:49.399 "snapshot": false, 00:19:49.399 "clone": false, 00:19:49.399 "esnap_clone": false 00:19:49.399 } 00:19:49.399 } 00:19:49.399 } 00:19:49.399 ] 00:19:49.399 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:19:49.399 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0839780f-93e3-4d94-940c-8e06e6ec5c9e 00:19:49.399 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:19:49.658 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:19:49.658 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0839780f-93e3-4d94-940c-8e06e6ec5c9e 00:19:49.658 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:19:49.917 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:19:49.918 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:19:49.918 [2024-10-01 22:18:45.078554] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:19:49.918 22:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0839780f-93e3-4d94-940c-8e06e6ec5c9e 00:19:49.918 22:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:19:49.918 22:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0839780f-93e3-4d94-940c-8e06e6ec5c9e 00:19:49.918 22:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:49.918 22:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:19:49.918 22:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:49.918 22:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:49.918 22:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:49.918 22:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:49.918 22:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:49.918 22:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:19:49.918 22:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0839780f-93e3-4d94-940c-8e06e6ec5c9e 00:19:50.178 request: 00:19:50.178 { 00:19:50.178 "uuid": "0839780f-93e3-4d94-940c-8e06e6ec5c9e", 00:19:50.178 "method": "bdev_lvol_get_lvstores", 00:19:50.178 "req_id": 1 00:19:50.178 } 00:19:50.178 Got JSON-RPC error response 00:19:50.178 response: 00:19:50.178 { 00:19:50.178 "code": -19, 00:19:50.178 "message": "No such device" 00:19:50.178 } 00:19:50.178 22:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:19:50.178 22:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:50.178 22:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:50.178 22:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:50.178 22:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:19:50.438 aio_bdev 00:19:50.438 22:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 11083f5c-934f-43f5-9ee6-14730e454727 00:19:50.438 22:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=11083f5c-934f-43f5-9ee6-14730e454727 00:19:50.438 22:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:50.438 22:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:19:50.438 22:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:50.438 22:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:50.438 22:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:50.438 22:18:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 11083f5c-934f-43f5-9ee6-14730e454727 -t 2000 00:19:50.699 [ 00:19:50.699 { 00:19:50.699 "name": "11083f5c-934f-43f5-9ee6-14730e454727", 00:19:50.699 "aliases": [ 00:19:50.699 "lvs/lvol" 00:19:50.699 ], 00:19:50.699 "product_name": "Logical Volume", 00:19:50.699 "block_size": 4096, 00:19:50.699 "num_blocks": 38912, 00:19:50.699 "uuid": "11083f5c-934f-43f5-9ee6-14730e454727", 00:19:50.699 "assigned_rate_limits": { 00:19:50.699 "rw_ios_per_sec": 0, 00:19:50.699 "rw_mbytes_per_sec": 0, 00:19:50.699 "r_mbytes_per_sec": 0, 00:19:50.699 "w_mbytes_per_sec": 0 00:19:50.699 }, 00:19:50.699 "claimed": false, 00:19:50.699 "zoned": false, 00:19:50.699 "supported_io_types": { 00:19:50.699 "read": true, 00:19:50.699 "write": true, 00:19:50.699 "unmap": true, 00:19:50.699 "flush": false, 00:19:50.699 "reset": true, 00:19:50.699 "nvme_admin": false, 00:19:50.699 "nvme_io": false, 00:19:50.699 "nvme_io_md": false, 00:19:50.699 "write_zeroes": true, 00:19:50.699 "zcopy": false, 00:19:50.699 "get_zone_info": false, 00:19:50.699 "zone_management": false, 00:19:50.699 "zone_append": false, 00:19:50.699 "compare": false, 00:19:50.699 "compare_and_write": false, 00:19:50.699 "abort": false, 00:19:50.699 "seek_hole": true, 00:19:50.699 "seek_data": true, 00:19:50.699 "copy": false, 00:19:50.699 "nvme_iov_md": false 00:19:50.699 }, 00:19:50.699 "driver_specific": { 00:19:50.699 "lvol": { 00:19:50.699 "lvol_store_uuid": "0839780f-93e3-4d94-940c-8e06e6ec5c9e", 00:19:50.699 "base_bdev": "aio_bdev", 00:19:50.699 "thin_provision": false, 00:19:50.699 "num_allocated_clusters": 38, 00:19:50.699 "snapshot": false, 00:19:50.699 "clone": false, 00:19:50.699 "esnap_clone": false 00:19:50.699 } 00:19:50.699 } 00:19:50.699 } 00:19:50.699 ] 00:19:50.699 22:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:19:50.699 22:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0839780f-93e3-4d94-940c-8e06e6ec5c9e 00:19:50.699 22:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:19:50.959 22:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:19:50.959 22:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:19:50.959 22:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0839780f-93e3-4d94-940c-8e06e6ec5c9e 00:19:50.959 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:19:50.959 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 11083f5c-934f-43f5-9ee6-14730e454727 00:19:51.219 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0839780f-93e3-4d94-940c-8e06e6ec5c9e 
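The teardown mirrors setup in reverse order so the lvstore is closed before its base bdev disappears. A sketch of the full cleanup sequence, of which the lvol and lvstore deletes have already run just above:

    # Reverse order of creation: lvol, then lvstore, then the AIO bdev,
    # then the backing file itself.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/scripts/rpc.py bdev_lvol_delete 11083f5c-934f-43f5-9ee6-14730e454727
    $SPDK/scripts/rpc.py bdev_lvol_delete_lvstore -u 0839780f-93e3-4d94-940c-8e06e6ec5c9e
    $SPDK/scripts/rpc.py bdev_aio_delete aio_bdev
    rm -f $SPDK/test/nvmf/target/aio_bdev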
00:19:51.479 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:19:51.479 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:51.479 00:19:51.479 real 0m17.369s 00:19:51.479 user 0m45.503s 00:19:51.479 sys 0m2.881s 00:19:51.479 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:51.479 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:19:51.479 ************************************ 00:19:51.479 END TEST lvs_grow_dirty 00:19:51.479 ************************************ 00:19:51.479 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:19:51.479 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:19:51.479 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:19:51.479 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:19:51.479 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:51.740 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:19:51.740 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:19:51.740 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:19:51.740 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:51.740 nvmf_trace.0 00:19:51.740 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:19:51.740 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:19:51.740 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:19:51.740 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:19:51.740 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:51.740 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:19:51.740 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:51.740 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:51.740 rmmod nvme_tcp 00:19:51.740 rmmod nvme_fabrics 00:19:51.740 rmmod nvme_keyring 00:19:51.740 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:51.740 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:19:51.740 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:19:51.740 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 31776 ']' 00:19:51.740 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 31776 00:19:51.740 
22:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 31776 ']' 00:19:51.740 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 31776 00:19:51.740 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:19:51.740 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:51.740 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 31776 00:19:51.740 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:51.740 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:51.740 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 31776' 00:19:51.740 killing process with pid 31776 00:19:51.740 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 31776 00:19:51.740 22:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 31776 00:19:52.000 22:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:19:52.000 22:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:19:52.000 22:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:19:52.000 22:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:19:52.000 22:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:19:52.000 22:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:19:52.000 22:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:19:52.000 22:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:52.000 22:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:52.000 22:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.000 22:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:52.000 22:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:54.546 00:19:54.546 real 0m44.631s 00:19:54.546 user 1m7.170s 00:19:54.546 sys 0m10.457s 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:19:54.546 ************************************ 00:19:54.546 END TEST nvmf_lvs_grow 00:19:54.546 ************************************ 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:19:54.546 ************************************ 00:19:54.546 START TEST nvmf_bdev_io_wait 00:19:54.546 ************************************ 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:19:54.546 * Looking for test storage... 00:19:54.546 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:54.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.546 --rc genhtml_branch_coverage=1 00:19:54.546 --rc genhtml_function_coverage=1 00:19:54.546 --rc genhtml_legend=1 00:19:54.546 --rc geninfo_all_blocks=1 00:19:54.546 --rc geninfo_unexecuted_blocks=1 00:19:54.546 00:19:54.546 ' 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:54.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.546 --rc genhtml_branch_coverage=1 00:19:54.546 --rc genhtml_function_coverage=1 00:19:54.546 --rc genhtml_legend=1 00:19:54.546 --rc geninfo_all_blocks=1 00:19:54.546 --rc geninfo_unexecuted_blocks=1 00:19:54.546 00:19:54.546 ' 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:54.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.546 --rc genhtml_branch_coverage=1 00:19:54.546 --rc genhtml_function_coverage=1 00:19:54.546 --rc genhtml_legend=1 00:19:54.546 --rc geninfo_all_blocks=1 00:19:54.546 --rc geninfo_unexecuted_blocks=1 00:19:54.546 00:19:54.546 ' 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:54.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.546 --rc genhtml_branch_coverage=1 00:19:54.546 --rc genhtml_function_coverage=1 00:19:54.546 --rc genhtml_legend=1 00:19:54.546 --rc geninfo_all_blocks=1 00:19:54.546 --rc geninfo_unexecuted_blocks=1 00:19:54.546 00:19:54.546 ' 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:54.546 22:18:49 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.546 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.547 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.547 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:19:54.547 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.547 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:19:54.547 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:54.547 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:54.547 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:54.547 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:54.547 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:54.547 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:54.547 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:54.547 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:54.547 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:54.547 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:54.547 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:54.547 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:19:54.547 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:19:54.547 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:19:54.547 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:54.547 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:19:54.547 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:19:54.547 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:19:54.547 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:54.547 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:54.547 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:54.547 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:19:54.547 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:19:54.547 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:19:54.547 22:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:01.130 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:01.130 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:20:01.130 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:01.130 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:01.130 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:01.130 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:01.130 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:01.130 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:20:01.130 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:01.130 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:20:01.130 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:20:01.130 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:20:01.130 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:20:01.130 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:20:01.130 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:20:01.130 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:01.130 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:01.130 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:01.130 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:01.130 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:01.130 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:01.130 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:01.131 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:01.131 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:01.131 22:18:56 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:01.131 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:01.131 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:01.131 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:01.392 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:01.392 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:01.392 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:01.392 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:01.392 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:01.392 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:01.392 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:01.393 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:01.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:01.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.577 ms 00:20:01.393 00:20:01.393 --- 10.0.0.2 ping statistics --- 00:20:01.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.393 rtt min/avg/max/mdev = 0.577/0.577/0.577/0.000 ms 00:20:01.393 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:01.393 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:01.393 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:20:01.393 00:20:01.393 --- 10.0.0.1 ping statistics --- 00:20:01.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.393 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:20:01.393 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:01.393 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:20:01.393 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:01.393 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:01.393 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:01.393 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:01.393 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:01.393 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:01.393 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:01.653 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:01.653 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:01.653 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:01.653 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:01.653 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=36789 00:20:01.653 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 36789 00:20:01.653 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:01.653 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 36789 ']' 00:20:01.653 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.653 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:01.653 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:01.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:01.653 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:01.653 22:18:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:01.653 [2024-10-01 22:18:56.733333] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
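[Editor's note] nvmfappstart, traced just above, launches nvmf_tgt inside the target namespace and then blocks in waitforlisten polling /var/tmp/spdk.sock (rpc_addr and max_retries=100 per the trace) until the app answers; the EAL banner that follows below is this target coming up. A rough condensation of that start-and-wait step — the poll body is an assumption (the real helper checks the socket and pid more carefully), and rpc_get_methods is simply a cheap RPC to probe with:

    # Sketch: launch the target in its namespace and wait for its RPC socket.
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    nvmfpid=$!
    for _ in $(seq 1 100); do
        # rpc.py exits non-zero until something listens on /var/tmp/spdk.sock.
        "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done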
00:20:01.653 [2024-10-01 22:18:56.733398] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:01.653 [2024-10-01 22:18:56.804953] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:01.653 [2024-10-01 22:18:56.875977] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:01.653 [2024-10-01 22:18:56.876018] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:01.653 [2024-10-01 22:18:56.876026] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:01.653 [2024-10-01 22:18:56.876034] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:01.653 [2024-10-01 22:18:56.876039] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:01.653 [2024-10-01 22:18:56.876209] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:01.653 [2024-10-01 22:18:56.876323] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:20:01.653 [2024-10-01 22:18:56.876478] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.653 [2024-10-01 22:18:56.876479] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:20:02.595 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:02.595 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:20:02.595 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:02.595 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:02.595 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:02.595 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:02.595 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:20:02.595 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.595 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:02.595 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.595 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:20:02.595 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.595 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:02.595 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.595 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:02.595 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.595 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:20:02.595 [2024-10-01 22:18:57.644816] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:02.595 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.595 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:02.595 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.595 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:02.595 Malloc0 00:20:02.595 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.595 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:02.595 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.595 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:02.595 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.595 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:02.595 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.595 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:02.595 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.595 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:02.596 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.596 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:02.596 [2024-10-01 22:18:57.713659] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:02.596 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.596 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=36887 00:20:02.596 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=36889 00:20:02.596 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:20:02.596 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:20:02.596 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:20:02.596 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:20:02.596 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:02.596 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:02.596 { 00:20:02.596 "params": { 
00:20:02.596 "name": "Nvme$subsystem", 00:20:02.596 "trtype": "$TEST_TRANSPORT", 00:20:02.596 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:02.596 "adrfam": "ipv4", 00:20:02.596 "trsvcid": "$NVMF_PORT", 00:20:02.596 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:02.596 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:02.596 "hdgst": ${hdgst:-false}, 00:20:02.596 "ddgst": ${ddgst:-false} 00:20:02.596 }, 00:20:02.596 "method": "bdev_nvme_attach_controller" 00:20:02.596 } 00:20:02.596 EOF 00:20:02.596 )") 00:20:02.596 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=36891 00:20:02.596 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:20:02.596 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:20:02.596 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:20:02.596 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:20:02.596 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:02.596 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=36894 00:20:02.596 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:02.596 { 00:20:02.596 "params": { 00:20:02.596 "name": "Nvme$subsystem", 00:20:02.596 "trtype": "$TEST_TRANSPORT", 00:20:02.596 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:02.596 "adrfam": "ipv4", 00:20:02.596 "trsvcid": "$NVMF_PORT", 00:20:02.596 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:02.596 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:02.596 "hdgst": ${hdgst:-false}, 00:20:02.596 "ddgst": ${ddgst:-false} 00:20:02.596 }, 00:20:02.596 "method": "bdev_nvme_attach_controller" 00:20:02.596 } 00:20:02.596 EOF 00:20:02.596 )") 00:20:02.596 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:20:02.596 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:20:02.596 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:20:02.596 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:20:02.596 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:20:02.596 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:20:02.596 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:02.596 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:02.596 { 00:20:02.596 "params": { 00:20:02.596 "name": "Nvme$subsystem", 00:20:02.596 "trtype": "$TEST_TRANSPORT", 00:20:02.596 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:02.596 "adrfam": "ipv4", 00:20:02.596 "trsvcid": "$NVMF_PORT", 00:20:02.596 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:02.596 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:02.596 "hdgst": ${hdgst:-false}, 00:20:02.596 
"ddgst": ${ddgst:-false} 00:20:02.596 }, 00:20:02.596 "method": "bdev_nvme_attach_controller" 00:20:02.596 } 00:20:02.596 EOF 00:20:02.596 )") 00:20:02.596 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:20:02.596 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:20:02.596 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:20:02.596 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:20:02.596 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:20:02.596 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:02.596 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:02.596 { 00:20:02.596 "params": { 00:20:02.596 "name": "Nvme$subsystem", 00:20:02.596 "trtype": "$TEST_TRANSPORT", 00:20:02.596 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:02.596 "adrfam": "ipv4", 00:20:02.596 "trsvcid": "$NVMF_PORT", 00:20:02.596 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:02.596 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:02.596 "hdgst": ${hdgst:-false}, 00:20:02.596 "ddgst": ${ddgst:-false} 00:20:02.596 }, 00:20:02.596 "method": "bdev_nvme_attach_controller" 00:20:02.596 } 00:20:02.596 EOF 00:20:02.596 )") 00:20:02.596 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:20:02.596 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 36887 00:20:02.596 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:20:02.596 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:20:02.596 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:20:02.596 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:20:02.596 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:20:02.596 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:20:02.596 "params": { 00:20:02.596 "name": "Nvme1", 00:20:02.596 "trtype": "tcp", 00:20:02.596 "traddr": "10.0.0.2", 00:20:02.596 "adrfam": "ipv4", 00:20:02.596 "trsvcid": "4420", 00:20:02.596 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:02.596 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:02.596 "hdgst": false, 00:20:02.596 "ddgst": false 00:20:02.596 }, 00:20:02.596 "method": "bdev_nvme_attach_controller" 00:20:02.596 }' 00:20:02.596 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:20:02.596 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:20:02.596 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:20:02.596 "params": { 00:20:02.596 "name": "Nvme1", 00:20:02.596 "trtype": "tcp", 00:20:02.596 "traddr": "10.0.0.2", 00:20:02.596 "adrfam": "ipv4", 00:20:02.596 "trsvcid": "4420", 00:20:02.596 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:02.596 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:02.596 "hdgst": false, 00:20:02.596 "ddgst": false 00:20:02.596 }, 00:20:02.596 "method": "bdev_nvme_attach_controller" 00:20:02.596 }' 00:20:02.596 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:20:02.596 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:20:02.596 "params": { 00:20:02.596 "name": "Nvme1", 00:20:02.596 "trtype": "tcp", 00:20:02.597 "traddr": "10.0.0.2", 00:20:02.597 "adrfam": "ipv4", 00:20:02.597 "trsvcid": "4420", 00:20:02.597 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:02.597 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:02.597 "hdgst": false, 00:20:02.597 "ddgst": false 00:20:02.597 }, 00:20:02.597 "method": "bdev_nvme_attach_controller" 00:20:02.597 }' 00:20:02.597 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:20:02.597 22:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:20:02.597 "params": { 00:20:02.597 "name": "Nvme1", 00:20:02.597 "trtype": "tcp", 00:20:02.597 "traddr": "10.0.0.2", 00:20:02.597 "adrfam": "ipv4", 00:20:02.597 "trsvcid": "4420", 00:20:02.597 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:02.597 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:02.597 "hdgst": false, 00:20:02.597 "ddgst": false 00:20:02.597 }, 00:20:02.597 "method": "bdev_nvme_attach_controller" 00:20:02.597 }' 00:20:02.597 [2024-10-01 22:18:57.768612] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:20:02.597 [2024-10-01 22:18:57.768612] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... [2024-10-01 22:18:57.768671] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:02.597 [2024-10-01 22:18:57.768672] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:20:02.597 [2024-10-01 22:18:57.773443] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... [2024-10-01 22:18:57.773493] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:20:02.597 [2024-10-01 22:18:57.773759] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
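[Editor's note] The resolved controller configs above are what gen_nvmf_target_json produced from the heredoc fragments: each bdevperf instance receives its config as --json /dev/fd/63 through process substitution, then all four workloads (write/read/flush/unmap on core masks 0x10 through 0x80) run concurrently and are reaped with wait, as the wait 36887/36889/36891/36894 calls and the per-run results below show (the fourth instance's EAL line continues directly below). A stripped-down, single-controller sketch of that pattern — the outer "subsystems" wrapper is assumed as the conventional framing for such bdev configs rather than quoted from the helper:

    # Sketch only: build a one-controller bdev config and launch the four
    # concurrent bdevperf workloads against the same NVMe-oF target.
    bp=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    gen_json() {
        printf '%s' '{ "subsystems": [ { "subsystem": "bdev", "config": [ {
            "method": "bdev_nvme_attach_controller",
            "params": { "name": "Nvme1", "trtype": "tcp", "adrfam": "ipv4",
                        "traddr": "10.0.0.2", "trsvcid": "4420",
                        "subnqn": "nqn.2016-06.io.spdk:cnode1",
                        "hostnqn": "nqn.2016-06.io.spdk:host1",
                        "hdgst": false, "ddgst": false } } ] } ] }'
    }
    # <(gen_json) appears to each child as /dev/fd/63, matching the traces above.
    $bp -m 0x10 -i 1 --json <(gen_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
    $bp -m 0x20 -i 2 --json <(gen_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
    $bp -m 0x40 -i 3 --json <(gen_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
    $bp -m 0x80 -i 4 --json <(gen_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
    wait $WRITE_PID $READ_PID $FLUSH_PID $UNMAP_PID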
00:20:02.597 [2024-10-01 22:18:57.773803] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:20:02.857 [2024-10-01 22:18:57.921119] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.857 [2024-10-01 22:18:57.972535] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:20:02.857 [2024-10-01 22:18:57.977083] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.857 [2024-10-01 22:18:58.027465] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.857 [2024-10-01 22:18:58.029325] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:20:02.857 [2024-10-01 22:18:58.075697] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.857 [2024-10-01 22:18:58.077648] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:20:03.118 [2024-10-01 22:18:58.125448] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:20:03.118 Running I/O for 1 seconds... 00:20:03.379 Running I/O for 1 seconds... 00:20:03.379 Running I/O for 1 seconds... 00:20:03.639 Running I/O for 1 seconds... 00:20:04.211 9498.00 IOPS, 37.10 MiB/s 00:20:04.211 Latency(us) 00:20:04.211 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.211 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:20:04.211 Nvme1n1 : 1.02 9490.34 37.07 0.00 0.00 13395.85 7427.41 24139.09 00:20:04.211 =================================================================================================================== 00:20:04.211 Total : 9490.34 37.07 0.00 0.00 13395.85 7427.41 24139.09 00:20:04.211 22:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 36889 00:20:04.471 8645.00 IOPS, 33.77 MiB/s 00:20:04.471 Latency(us) 00:20:04.471 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.471 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:20:04.471 Nvme1n1 : 1.01 8723.57 34.08 0.00 0.00 14626.76 4423.68 35607.89 00:20:04.471 =================================================================================================================== 00:20:04.471 Total : 8723.57 34.08 0.00 0.00 14626.76 4423.68 35607.89 00:20:04.471 177608.00 IOPS, 693.78 MiB/s 00:20:04.471 Latency(us) 00:20:04.471 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.471 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:20:04.471 Nvme1n1 : 1.00 177247.86 692.37 0.00 0.00 718.08 331.09 1993.39 00:20:04.471 =================================================================================================================== 00:20:04.471 Total : 177247.86 692.37 0.00 0.00 718.08 331.09 1993.39 00:20:04.471 22:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 36891 00:20:04.731 14795.00 IOPS, 57.79 MiB/s 00:20:04.731 Latency(us) 00:20:04.731 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.731 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:20:04.731 Nvme1n1 : 1.01 14857.86 58.04 0.00 0.00 8591.91 2280.11 14090.24 00:20:04.731 =================================================================================================================== 00:20:04.731 Total : 14857.86 58.04 0.00 0.00 8591.91 
2280.11 14090.24 00:20:04.731 22:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 36894 00:20:04.731 22:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:04.731 22:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.731 22:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:04.731 22:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.731 22:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:20:04.731 22:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:20:04.731 22:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:04.731 22:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:20:04.731 22:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:04.731 22:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:20:04.732 22:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:04.732 22:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:04.732 rmmod nvme_tcp 00:20:04.732 rmmod nvme_fabrics 00:20:04.992 rmmod nvme_keyring 00:20:04.992 22:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:04.992 22:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:20:04.992 22:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:20:04.992 22:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 36789 ']' 00:20:04.992 22:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 36789 00:20:04.992 22:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 36789 ']' 00:20:04.992 22:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 36789 00:20:04.992 22:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:20:04.992 22:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:04.992 22:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 36789 00:20:04.992 22:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:04.992 22:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:04.992 22:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 36789' 00:20:04.992 killing process with pid 36789 00:20:04.992 22:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 36789 00:20:04.992 22:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 36789 00:20:04.992 22:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:04.992 
22:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:04.992 22:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:04.992 22:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:20:04.992 22:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:20:04.992 22:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:04.992 22:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:20:04.992 22:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:04.992 22:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:04.992 22:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.992 22:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:04.992 22:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.541 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:07.541 00:20:07.541 real 0m13.026s 00:20:07.541 user 0m21.132s 00:20:07.541 sys 0m7.180s 00:20:07.541 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:07.541 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:07.541 ************************************ 00:20:07.541 END TEST nvmf_bdev_io_wait 00:20:07.541 ************************************ 00:20:07.541 22:19:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:20:07.541 22:19:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:07.541 22:19:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:07.541 22:19:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:20:07.541 ************************************ 00:20:07.541 START TEST nvmf_queue_depth 00:20:07.541 ************************************ 00:20:07.541 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:20:07.541 * Looking for test storage... 
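The xtrace that follows is scripts/common.sh deciding which lcov flags this run can use: lt 1.15 2 splits both version strings on the characters . - : and compares them field by field. A minimal standalone sketch of that dotted-version comparison, assuming plain bash (a reconstruction of the idea, not the verbatim SPDK helper):

    # Returns 0 (true) when version $1 sorts strictly before version $2.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first larger field: not less-than
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first smaller field: less-than
        done
        return 1   # all fields equal, so "less than" is false
    }

    lt 1.15 2 && echo 'lcov 1.15 predates 2, keep the legacy --rc options'

Here 1.15 splits into (1 15) and 2 into (2); the very first field comparison, 1 < 2, already settles it, which is why the trace below bottoms out at scripts/common.sh@368 with return 0 and then exports the --rc lcov_branch_coverage / --rc lcov_function_coverage option set.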
00:20:07.541 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:07.541 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:07.541 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:20:07.541 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:07.541 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:07.541 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:07.541 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:07.541 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:07.541 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:20:07.541 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:20:07.541 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:20:07.541 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:20:07.541 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:20:07.541 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:20:07.541 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:20:07.541 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:07.541 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:20:07.541 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:20:07.541 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:07.541 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:07.541 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:20:07.541 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:20:07.541 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:07.541 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:20:07.541 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:20:07.541 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:20:07.541 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:07.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.542 --rc genhtml_branch_coverage=1 00:20:07.542 --rc genhtml_function_coverage=1 00:20:07.542 --rc genhtml_legend=1 00:20:07.542 --rc geninfo_all_blocks=1 00:20:07.542 --rc geninfo_unexecuted_blocks=1 00:20:07.542 00:20:07.542 ' 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:07.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.542 --rc genhtml_branch_coverage=1 00:20:07.542 --rc genhtml_function_coverage=1 00:20:07.542 --rc genhtml_legend=1 00:20:07.542 --rc geninfo_all_blocks=1 00:20:07.542 --rc geninfo_unexecuted_blocks=1 00:20:07.542 00:20:07.542 ' 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:07.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.542 --rc genhtml_branch_coverage=1 00:20:07.542 --rc genhtml_function_coverage=1 00:20:07.542 --rc genhtml_legend=1 00:20:07.542 --rc geninfo_all_blocks=1 00:20:07.542 --rc geninfo_unexecuted_blocks=1 00:20:07.542 00:20:07.542 ' 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:07.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.542 --rc genhtml_branch_coverage=1 00:20:07.542 --rc genhtml_function_coverage=1 00:20:07.542 --rc genhtml_legend=1 00:20:07.542 --rc geninfo_all_blocks=1 00:20:07.542 --rc geninfo_unexecuted_blocks=1 00:20:07.542 00:20:07.542 ' 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:07.542 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:07.542 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:07.543 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:07.543 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:07.543 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.543 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:07.543 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.543 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:07.543 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:07.543 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:20:07.543 22:19:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:15.708 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:15.708 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:20:15.708 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:15.708 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:15.708 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:15.708 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:15.708 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:15.708 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:20:15.708 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:15.708 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:20:15.708 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:20:15.708 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:20:15.708 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:20:15.708 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:20:15.708 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:20:15.708 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:15.708 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:15.708 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:15.708 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:15.708 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:15.708 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:15.708 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:15.708 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:15.708 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:15.709 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:15.709 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:15.709 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:15.709 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:15.709 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:15.709 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:20:15.709 00:20:15.709 --- 10.0.0.2 ping statistics --- 00:20:15.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:15.709 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:15.709 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:15.709 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:20:15.709 00:20:15.709 --- 10.0.0.1 ping statistics --- 00:20:15.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:15.709 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=41592 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 41592 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 41592 ']' 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:15.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:15.709 22:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:15.709 [2024-10-01 22:19:09.952112] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
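At this point nvmf_tgt is coming up inside cvl_0_0_ns_spdk, the network namespace the preceding commands assembled: one port of the E810 pair becomes the target side, the other stays in the root namespace as the initiator. Collected into one runnable sketch (each command is taken from the trace above; the cvl_0_0/cvl_0_1 names belong to this host's ports, so substitute your own):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port toward the initiator interface
    # (comment tag trimmed from the trace's longer SPDK_NVMF:... marker)
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                                   # root ns -> namespace, as verified above

The sub-millisecond ping times confirm both directions work before any NVMe traffic is attempted; every later target-side command in this log is wrapped in "ip netns exec cvl_0_0_ns_spdk".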
00:20:15.709 [2024-10-01 22:19:09.952162] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:15.710 [2024-10-01 22:19:10.042397] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.710 [2024-10-01 22:19:10.116135] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:15.710 [2024-10-01 22:19:10.116180] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:15.710 [2024-10-01 22:19:10.116189] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:15.710 [2024-10-01 22:19:10.116196] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:15.710 [2024-10-01 22:19:10.116202] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:15.710 [2024-10-01 22:19:10.116228] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:15.710 22:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:15.710 22:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:20:15.710 22:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:15.710 22:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:15.710 22:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:15.710 22:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:15.710 22:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:15.710 22:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.710 22:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:15.710 [2024-10-01 22:19:10.805020] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:15.710 22:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.710 22:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:15.710 22:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.710 22:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:15.710 Malloc0 00:20:15.710 22:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.710 22:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:15.710 22:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.710 22:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:15.710 22:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.710 22:19:10 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:15.710 22:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.710 22:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:15.710 22:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.710 22:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:15.710 22:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.710 22:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:15.710 [2024-10-01 22:19:10.850170] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:15.710 22:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.710 22:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=41936 00:20:15.710 22:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:15.710 22:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:20:15.710 22:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 41936 /var/tmp/bdevperf.sock 00:20:15.710 22:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 41936 ']' 00:20:15.710 22:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:15.710 22:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:15.710 22:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:15.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:15.710 22:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:15.710 22:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:15.710 [2024-10-01 22:19:10.909451] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:20:15.710 [2024-10-01 22:19:10.909517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid41936 ] 00:20:15.971 [2024-10-01 22:19:10.975585] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.971 [2024-10-01 22:19:11.050319] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.540 22:19:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:16.540 22:19:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:20:16.540 22:19:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:16.540 22:19:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.540 22:19:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:16.801 NVMe0n1 00:20:16.801 22:19:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.801 22:19:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:16.801 Running I/O for 10 seconds... 00:20:27.056 9660.00 IOPS, 37.73 MiB/s 10721.50 IOPS, 41.88 MiB/s 10925.33 IOPS, 42.68 MiB/s 11084.00 IOPS, 43.30 MiB/s 11233.60 IOPS, 43.88 MiB/s 11266.33 IOPS, 44.01 MiB/s 11321.86 IOPS, 44.23 MiB/s 11391.88 IOPS, 44.50 MiB/s 11379.22 IOPS, 44.45 MiB/s 11401.50 IOPS, 44.54 MiB/s 00:20:27.056 Latency(us) 00:20:27.056 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.056 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:20:27.056 Verification LBA range: start 0x0 length 0x4000 00:20:27.056 NVMe0n1 : 10.05 11435.75 44.67 0.00 0.00 89196.15 6553.60 68157.44 00:20:27.056 =================================================================================================================== 00:20:27.056 Total : 11435.75 44.67 0.00 0.00 89196.15 6553.60 68157.44 00:20:27.056 { 00:20:27.056 "results": [ 00:20:27.056 { 00:20:27.056 "job": "NVMe0n1", 00:20:27.056 "core_mask": "0x1", 00:20:27.056 "workload": "verify", 00:20:27.056 "status": "finished", 00:20:27.056 "verify_range": { 00:20:27.056 "start": 0, 00:20:27.056 "length": 16384 00:20:27.056 }, 00:20:27.056 "queue_depth": 1024, 00:20:27.056 "io_size": 4096, 00:20:27.056 "runtime": 10.048574, 00:20:27.056 "iops": 11435.75197834041, 00:20:27.056 "mibps": 44.670906165392225, 00:20:27.056 "io_failed": 0, 00:20:27.056 "io_timeout": 0, 00:20:27.056 "avg_latency_us": 89196.14901893896, 00:20:27.056 "min_latency_us": 6553.6, 00:20:27.056 "max_latency_us": 68157.44 00:20:27.056 } 00:20:27.056 ], 00:20:27.056 "core_count": 1 00:20:27.056 } 00:20:27.056 22:19:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 41936 00:20:27.056 22:19:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 41936 ']' 00:20:27.056 22:19:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 41936 00:20:27.056 22:19:22 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:20:27.056 22:19:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:27.056 22:19:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 41936 00:20:27.056 22:19:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:27.056 22:19:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:27.056 22:19:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 41936' 00:20:27.056 killing process with pid 41936 00:20:27.056 22:19:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 41936 00:20:27.056 Received shutdown signal, test time was about 10.000000 seconds 00:20:27.056 00:20:27.056 Latency(us) 00:20:27.056 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.056 =================================================================================================================== 00:20:27.056 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:27.056 22:19:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 41936 00:20:27.317 22:19:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:20:27.317 22:19:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:20:27.317 22:19:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:27.317 22:19:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:20:27.317 22:19:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:27.317 22:19:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:20:27.317 22:19:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:27.317 22:19:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:27.317 rmmod nvme_tcp 00:20:27.317 rmmod nvme_fabrics 00:20:27.317 rmmod nvme_keyring 00:20:27.317 22:19:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:27.317 22:19:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:20:27.317 22:19:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:20:27.317 22:19:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 41592 ']' 00:20:27.317 22:19:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 41592 00:20:27.317 22:19:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 41592 ']' 00:20:27.317 22:19:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 41592 00:20:27.317 22:19:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:20:27.317 22:19:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:27.317 22:19:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 41592 00:20:27.317 22:19:22 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:27.317 22:19:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:27.317 22:19:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 41592' 00:20:27.317 killing process with pid 41592 00:20:27.317 22:19:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 41592 00:20:27.317 22:19:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 41592 00:20:27.576 22:19:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:27.576 22:19:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:27.576 22:19:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:27.576 22:19:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:20:27.576 22:19:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:20:27.576 22:19:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:27.576 22:19:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:20:27.576 22:19:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:27.576 22:19:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:27.576 22:19:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.576 22:19:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:27.576 22:19:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:29.488 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:29.488 00:20:29.488 real 0m22.311s 00:20:29.488 user 0m25.899s 00:20:29.488 sys 0m6.750s 00:20:29.488 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:29.489 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:29.489 ************************************ 00:20:29.489 END TEST nvmf_queue_depth 00:20:29.489 ************************************ 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:20:29.750 ************************************ 00:20:29.750 START TEST nvmf_target_multipath 00:20:29.750 ************************************ 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:20:29.750 * Looking for test storage... 
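Before the multipath boilerplate scrolls past, the nvmf_queue_depth test that just ended reduces to a short RPC sequence plus one bdevperf run. A hedged reconstruction from the trace above (rpc_cmd is the suite's wrapper around scripts/rpc.py, shown here called directly; the names, sizes and the 10.0.0.2:4420 listener are the ones this log used):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192      # TCP transport, options straight from the trace
    $rpc bdev_malloc_create 64 512 -b Malloc0         # 64 MB RAM disk, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf then attaches over TCP and drives the namespace with -q 1024 -o 4096 -w verify -t 10; the 1024-deep queue is the whole point of the test. The headline number is self-consistent: 11435.75 IOPS at 4096-byte I/Os is 11435.75 * 4096 / 1048576 ≈ 44.67 MiB/s, matching the MiB/s column in the summary table above.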
00:20:29.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:29.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.750 --rc genhtml_branch_coverage=1 00:20:29.750 --rc genhtml_function_coverage=1 00:20:29.750 --rc genhtml_legend=1 00:20:29.750 --rc geninfo_all_blocks=1 00:20:29.750 --rc geninfo_unexecuted_blocks=1 00:20:29.750 00:20:29.750 ' 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:29.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.750 --rc genhtml_branch_coverage=1 00:20:29.750 --rc genhtml_function_coverage=1 00:20:29.750 --rc genhtml_legend=1 00:20:29.750 --rc geninfo_all_blocks=1 00:20:29.750 --rc geninfo_unexecuted_blocks=1 00:20:29.750 00:20:29.750 ' 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:29.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.750 --rc genhtml_branch_coverage=1 00:20:29.750 --rc genhtml_function_coverage=1 00:20:29.750 --rc genhtml_legend=1 00:20:29.750 --rc geninfo_all_blocks=1 00:20:29.750 --rc geninfo_unexecuted_blocks=1 00:20:29.750 00:20:29.750 ' 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:29.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.750 --rc genhtml_branch_coverage=1 00:20:29.750 --rc genhtml_function_coverage=1 00:20:29.750 --rc genhtml_legend=1 00:20:29.750 --rc geninfo_all_blocks=1 00:20:29.750 --rc geninfo_unexecuted_blocks=1 00:20:29.750 00:20:29.750 ' 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:29.750 22:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:29.750 22:19:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:29.750 22:19:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:29.750 22:19:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:29.750 22:19:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:20:30.011 22:19:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:30.011 22:19:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:30.011 22:19:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:30.012 22:19:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.012 22:19:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.012 22:19:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.012 22:19:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:20:30.012 22:19:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.012 22:19:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:20:30.012 22:19:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:30.012 22:19:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:30.012 22:19:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:30.012 22:19:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:30.012 22:19:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:30.012 22:19:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:30.012 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:30.012 22:19:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:30.012 22:19:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:30.012 22:19:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:30.012 22:19:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:30.012 22:19:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:30.012 22:19:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:20:30.012 22:19:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:30.012 22:19:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:20:30.012 22:19:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:30.012 22:19:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:30.012 22:19:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:30.012 22:19:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:30.012 22:19:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:30.012 22:19:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:30.012 22:19:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:30.012 22:19:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:30.012 22:19:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:30.012 22:19:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:30.012 22:19:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:20:30.012 22:19:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:38.158 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:38.158 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:38.158 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:38.158 22:19:32 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:38.158 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:38.159 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:38.159 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:38.159 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:20:38.159 00:20:38.159 --- 10.0.0.2 ping statistics --- 00:20:38.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.159 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:38.159 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:38.159 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:20:38.159 00:20:38.159 --- 10.0.0.1 ping statistics --- 00:20:38.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.159 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:20:38.159 only one NIC for nvmf test 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
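For reference, the nvmf_tcp_init sequence traced a few entries back reduces to the following shell steps. This is a condensed sketch rather than the exact common.sh code; the interface and namespace names (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk) are simply the ones this host reports, and the iptables comment text is elided:

TARGET_IF=cvl_0_0; INITIATOR_IF=cvl_0_1; NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"                  # start from clean interfaces
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"           # target NIC moves into the namespace
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"    # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
# open the NVMe/TCP port, tagged so teardown can find and strip the rule again
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:...'
ping -c 1 10.0.0.2                             # root ns -> namespaced target
ip netns exec "$NS" ping -c 1 10.0.0.1         # and the reverse direction

Putting the target NIC in its own namespace is what lets a single host exercise a real TCP path between initiator and target without a second machine.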
00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:38.159 rmmod nvme_tcp 00:20:38.159 rmmod nvme_fabrics 00:20:38.159 rmmod nvme_keyring 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:38.159 22:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.546 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:39.546 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:20:39.546 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:20:39.546 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:39.546 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:20:39.546 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:39.546 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:20:39.546 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:39.546 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:39.546 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:39.546 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:20:39.546 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:20:39.546 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:20:39.546 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:39.546 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:39.546 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:39.546 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:20:39.546 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:20:39.546 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:39.546 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:20:39.546 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:39.546 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:39.546 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.546 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:39.546 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.546 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:39.546 00:20:39.546 real 0m9.850s 00:20:39.546 user 0m2.108s 00:20:39.546 sys 0m5.715s 00:20:39.546 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:39.546 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:39.546 ************************************ 00:20:39.546 END TEST nvmf_target_multipath 00:20:39.546 ************************************ 00:20:39.546 22:19:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:20:39.546 22:19:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:39.546 22:19:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:39.546 22:19:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:20:39.546 ************************************ 00:20:39.546 START TEST nvmf_zcopy 00:20:39.546 ************************************ 00:20:39.546 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:20:39.808 * Looking for test storage... 
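The zcopy test that starts here opens, like multipath above, with scripts/common.sh probing the installed lcov version to decide which coverage flags it may pass; that is what the version-comparison trace below replays. Reconstructed from the trace, the helpers look roughly like this (a sketch, not the verbatim script; the non-numeric fallback in decimal is an assumption):

decimal() {                      # echo $1 when it is a plain integer
    local d=$1
    [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0   # fallback to 0 is assumed
}

cmp_versions() {                 # usage: cmp_versions 1.15 '<' 2
    local IFS=.-:
    local ver1 ver2 ver1_l ver2_l op=$2 v
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        ver1[v]=$(decimal "${ver1[v]:-0}")
        ver2[v]=$(decimal "${ver2[v]:-0}")
        (( ver1[v] > ver2[v] )) && { [[ $op == '>' ]]; return; }
        (( ver1[v] < ver2[v] )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '==' ]]            # all fields compared equal
}

lt() { cmp_versions "$1" '<' "$2"; }

# as in the trace: lcov 1.15 < 2, so the extra --rc branch/function options apply
lt "$(lcov --version | awk '{print $NF}')" 2 && \
    lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'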
00:20:39.809 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:39.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.809 --rc genhtml_branch_coverage=1 00:20:39.809 --rc genhtml_function_coverage=1 00:20:39.809 --rc genhtml_legend=1 00:20:39.809 --rc geninfo_all_blocks=1 00:20:39.809 --rc geninfo_unexecuted_blocks=1 00:20:39.809 00:20:39.809 ' 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:39.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.809 --rc genhtml_branch_coverage=1 00:20:39.809 --rc genhtml_function_coverage=1 00:20:39.809 --rc genhtml_legend=1 00:20:39.809 --rc geninfo_all_blocks=1 00:20:39.809 --rc geninfo_unexecuted_blocks=1 00:20:39.809 00:20:39.809 ' 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:39.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.809 --rc genhtml_branch_coverage=1 00:20:39.809 --rc genhtml_function_coverage=1 00:20:39.809 --rc genhtml_legend=1 00:20:39.809 --rc geninfo_all_blocks=1 00:20:39.809 --rc geninfo_unexecuted_blocks=1 00:20:39.809 00:20:39.809 ' 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:39.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.809 --rc genhtml_branch_coverage=1 00:20:39.809 --rc genhtml_function_coverage=1 00:20:39.809 --rc genhtml_legend=1 00:20:39.809 --rc geninfo_all_blocks=1 00:20:39.809 --rc geninfo_unexecuted_blocks=1 00:20:39.809 00:20:39.809 ' 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.809 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.810 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.810 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:20:39.810 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.810 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:20:39.810 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:39.810 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:39.810 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:39.810 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:39.810 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:39.810 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:39.810 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:39.810 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:39.810 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:39.810 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:39.810 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:20:39.810 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:39.810 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:20:39.810 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:39.810 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:39.810 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:39.810 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.810 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:39.810 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.810 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:39.810 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:39.810 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:20:39.810 22:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:47.953 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:47.953 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:47.953 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:47.953 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:47.953 22:19:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:47.953 22:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:47.953 22:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:47.954 22:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:47.954 22:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:47.954 22:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:47.954 22:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:47.954 22:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:47.954 22:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:47.954 22:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:47.954 22:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:47.954 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:47.954 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms 00:20:47.954 00:20:47.954 --- 10.0.0.2 ping statistics --- 00:20:47.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.954 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:20:47.954 22:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:47.954 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:47.954 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:20:47.954 00:20:47.954 --- 10.0.0.1 ping statistics --- 00:20:47.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.954 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:20:47.954 22:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:47.954 22:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:20:47.954 22:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:47.954 22:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:47.954 22:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:47.954 22:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:47.954 22:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:47.954 22:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:47.954 22:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:47.954 22:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:20:47.954 22:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:47.954 22:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:47.954 22:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:20:47.954 22:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=52638 00:20:47.954 22:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 52638 00:20:47.954 22:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:47.954 22:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 52638 ']' 00:20:47.954 22:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.954 22:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:47.954 22:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:47.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:47.954 22:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:47.954 22:19:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:20:47.954 [2024-10-01 22:19:42.397933] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
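At this point nvmfappstart has launched the target: nvmf_tgt runs inside the namespace (via the NVMF_TARGET_NS_CMD prefix) and waitforlisten blocks until the RPC socket answers before any rpc.py calls are issued. A rough approximation of that pattern, with the polling loop assumed rather than copied from autotest_common.sh:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

# poll the default RPC socket (/var/tmp/spdk.sock) until the app is up
while ! ./scripts/rpc.py -t 1 rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" || exit 1   # bail out if the target already died
    sleep 0.5
done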
00:20:47.954 [2024-10-01 22:19:42.397998] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:47.954 [2024-10-01 22:19:42.487271] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.954 [2024-10-01 22:19:42.580504] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:47.954 [2024-10-01 22:19:42.580561] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:47.954 [2024-10-01 22:19:42.580571] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:47.954 [2024-10-01 22:19:42.580578] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:47.954 [2024-10-01 22:19:42.580585] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:47.954 [2024-10-01 22:19:42.580616] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:48.215 22:19:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:48.215 22:19:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:20:48.215 22:19:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:48.215 22:19:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:48.215 22:19:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:20:48.215 22:19:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:48.215 22:19:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:20:48.215 22:19:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:20:48.215 22:19:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.215 22:19:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:20:48.215 [2024-10-01 22:19:43.262187] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:48.215 22:19:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.215 22:19:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:20:48.215 22:19:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.215 22:19:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:20:48.215 22:19:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.215 22:19:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:48.215 22:19:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.215 22:19:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:20:48.215 [2024-10-01 22:19:43.278427] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:20:48.215 22:19:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.215 22:19:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:48.215 22:19:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.215 22:19:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:20:48.215 22:19:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.215 22:19:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:20:48.215 22:19:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.215 22:19:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:20:48.215 malloc0 00:20:48.215 22:19:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.215 22:19:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:48.215 22:19:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.215 22:19:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:20:48.215 22:19:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.215 22:19:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:20:48.215 22:19:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:20:48.215 22:19:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:20:48.215 22:19:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:20:48.215 22:19:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:48.215 22:19:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:48.215 { 00:20:48.215 "params": { 00:20:48.215 "name": "Nvme$subsystem", 00:20:48.215 "trtype": "$TEST_TRANSPORT", 00:20:48.215 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.215 "adrfam": "ipv4", 00:20:48.215 "trsvcid": "$NVMF_PORT", 00:20:48.215 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.215 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.215 "hdgst": ${hdgst:-false}, 00:20:48.215 "ddgst": ${ddgst:-false} 00:20:48.215 }, 00:20:48.215 "method": "bdev_nvme_attach_controller" 00:20:48.215 } 00:20:48.215 EOF 00:20:48.215 )") 00:20:48.215 22:19:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:20:48.215 22:19:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
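Note on the bring-up traced between zcopy.sh lines 22 and 30: rpc_cmd is a thin retry wrapper around scripts/rpc.py, so the same sequence can be replayed by hand against a running target. In the sketch below every RPC name, flag, NQN and address is copied from the trace itself; only the $rpc shorthand is an added convenience:

# Replay of the RPC bring-up traced above, assuming nvmf_tgt is already
# listening on the default /var/tmp/spdk.sock.
rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

# TCP transport with zero-copy enabled and in-capsule data disabled (-c 0),
# matching the '-t tcp -o -c 0 --zcopy' options in the log.
$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy

# Subsystem cnode1: allow any host (-a), fixed serial, at most 10 namespaces.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10

# Data listener plus a discovery listener on the target-side address.
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# 32 MiB malloc bdev with a 4096-byte block size, exposed as NSID 1.
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1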
00:20:48.215 22:19:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:20:48.215 22:19:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:20:48.215 "params": { 00:20:48.215 "name": "Nvme1", 00:20:48.215 "trtype": "tcp", 00:20:48.215 "traddr": "10.0.0.2", 00:20:48.215 "adrfam": "ipv4", 00:20:48.215 "trsvcid": "4420", 00:20:48.215 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:48.215 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:48.215 "hdgst": false, 00:20:48.215 "ddgst": false 00:20:48.215 }, 00:20:48.215 "method": "bdev_nvme_attach_controller" 00:20:48.215 }' 00:20:48.215 [2024-10-01 22:19:43.367259] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:20:48.215 [2024-10-01 22:19:43.367333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid52866 ] 00:20:48.215 [2024-10-01 22:19:43.434569] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.475 [2024-10-01 22:19:43.508767] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.734 Running I/O for 10 seconds... 00:20:58.596 6596.00 IOPS, 51.53 MiB/s 6746.00 IOPS, 52.70 MiB/s 7729.67 IOPS, 60.39 MiB/s 8225.25 IOPS, 64.26 MiB/s 8520.40 IOPS, 66.57 MiB/s 8719.50 IOPS, 68.12 MiB/s 8858.29 IOPS, 69.21 MiB/s 8963.50 IOPS, 70.03 MiB/s 9046.89 IOPS, 70.68 MiB/s 9110.30 IOPS, 71.17 MiB/s 00:20:58.596 Latency(us) 00:20:58.596 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:58.596 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:20:58.596 Verification LBA range: start 0x0 length 0x1000 00:20:58.596 Nvme1n1 : 10.01 9111.93 71.19 0.00 0.00 13995.05 1856.85 28180.48 00:20:58.596 =================================================================================================================== 00:20:58.596 Total : 9111.93 71.19 0.00 0.00 13995.05 1856.85 28180.48 00:20:58.858 22:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=54998 00:20:58.858 22:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:20:58.858 22:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:20:58.858 22:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:20:58.858 22:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:20:58.858 22:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:20:58.858 [2024-10-01 22:19:53.969582] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:58.858 [2024-10-01 22:19:53.969612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:58.858 22:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:20:58.858 22:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:58.858 22:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:58.858 { 00:20:58.858 "params": { 00:20:58.858 "name": "Nvme$subsystem", 00:20:58.858 "trtype": "$TEST_TRANSPORT", 00:20:58.858 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:20:58.858 "adrfam": "ipv4", 00:20:58.858 "trsvcid": "$NVMF_PORT", 00:20:58.858 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:58.858 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:58.858 "hdgst": ${hdgst:-false}, 00:20:58.858 "ddgst": ${ddgst:-false} 00:20:58.858 }, 00:20:58.858 "method": "bdev_nvme_attach_controller" 00:20:58.858 } 00:20:58.858 EOF 00:20:58.858 )") 00:20:58.858 22:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:20:58.858 [2024-10-01 22:19:53.977572] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:58.858 [2024-10-01 22:19:53.977581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:58.858 22:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:20:58.858 22:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:20:58.858 22:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:20:58.858 "params": { 00:20:58.858 "name": "Nvme1", 00:20:58.858 "trtype": "tcp", 00:20:58.858 "traddr": "10.0.0.2", 00:20:58.858 "adrfam": "ipv4", 00:20:58.858 "trsvcid": "4420", 00:20:58.858 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:58.858 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:58.858 "hdgst": false, 00:20:58.858 "ddgst": false 00:20:58.858 }, 00:20:58.858 "method": "bdev_nvme_attach_controller" 00:20:58.858 }' 00:20:58.858 [2024-10-01 22:19:53.985591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:58.858 [2024-10-01 22:19:53.985598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:58.858 [2024-10-01 22:19:53.993612] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:58.858 [2024-10-01 22:19:53.993619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:58.858 [2024-10-01 22:19:54.001636] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:58.858 [2024-10-01 22:19:54.001644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:58.858 [2024-10-01 22:19:54.009656] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:58.858 [2024-10-01 22:19:54.009664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:58.858 [2024-10-01 22:19:54.017676] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:58.858 [2024-10-01 22:19:54.017683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:58.858 [2024-10-01 22:19:54.025696] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:58.858 [2024-10-01 22:19:54.025707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:58.858 [2024-10-01 22:19:54.025738] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:20:58.858 [2024-10-01 22:19:54.025783] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54998 ] 00:20:58.858 [2024-10-01 22:19:54.033715] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:58.858 [2024-10-01 22:19:54.033723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:58.858 [2024-10-01 22:19:54.041735] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:58.858 [2024-10-01 22:19:54.041742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:58.858 [2024-10-01 22:19:54.049757] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:58.858 [2024-10-01 22:19:54.049763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:58.858 [2024-10-01 22:19:54.057778] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:58.858 [2024-10-01 22:19:54.057785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:58.858 [2024-10-01 22:19:54.065800] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:58.858 [2024-10-01 22:19:54.065808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:58.858 [2024-10-01 22:19:54.073820] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:58.858 [2024-10-01 22:19:54.073827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:58.858 [2024-10-01 22:19:54.081839] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:58.858 [2024-10-01 22:19:54.081847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:58.858 [2024-10-01 22:19:54.086073] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.858 [2024-10-01 22:19:54.089861] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:58.858 [2024-10-01 22:19:54.089869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:58.858 [2024-10-01 22:19:54.097882] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:58.858 [2024-10-01 22:19:54.097889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:58.858 [2024-10-01 22:19:54.105902] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:58.858 [2024-10-01 22:19:54.105911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.119 [2024-10-01 22:19:54.113922] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.119 [2024-10-01 22:19:54.113931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.119 [2024-10-01 22:19:54.121943] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.119 [2024-10-01 22:19:54.121954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.119 [2024-10-01 22:19:54.129963] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.119 [2024-10-01 22:19:54.129971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:20:59.119 [2024-10-01 22:19:54.137982] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.119 [2024-10-01 22:19:54.137990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.119 [2024-10-01 22:19:54.146002] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.119 [2024-10-01 22:19:54.146010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.119 [2024-10-01 22:19:54.150355] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:59.119 [2024-10-01 22:19:54.154022] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.119 [2024-10-01 22:19:54.154030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.119 [2024-10-01 22:19:54.162042] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.119 [2024-10-01 22:19:54.162051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.119 [2024-10-01 22:19:54.170066] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.119 [2024-10-01 22:19:54.170079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.119 [2024-10-01 22:19:54.178087] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.119 [2024-10-01 22:19:54.178097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.119 [2024-10-01 22:19:54.186106] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.119 [2024-10-01 22:19:54.186114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.119 [2024-10-01 22:19:54.194126] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.119 [2024-10-01 22:19:54.194133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.119 [2024-10-01 22:19:54.202148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.119 [2024-10-01 22:19:54.202155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.119 [2024-10-01 22:19:54.210167] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.119 [2024-10-01 22:19:54.210174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.119 [2024-10-01 22:19:54.218186] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.119 [2024-10-01 22:19:54.218193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.119 [2024-10-01 22:19:54.226206] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.119 [2024-10-01 22:19:54.226213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.119 [2024-10-01 22:19:54.234229] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.119 [2024-10-01 22:19:54.234237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.119 [2024-10-01 22:19:54.242248] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.119 [2024-10-01 22:19:54.242256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.119 [2024-10-01 
22:19:54.250267] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.119 [2024-10-01 22:19:54.250275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.119 [2024-10-01 22:19:54.258288] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.119 [2024-10-01 22:19:54.258295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.119 [2024-10-01 22:19:54.266309] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.119 [2024-10-01 22:19:54.266316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.119 [2024-10-01 22:19:54.274330] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.119 [2024-10-01 22:19:54.274337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.119 [2024-10-01 22:19:54.282361] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.119 [2024-10-01 22:19:54.282375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.119 [2024-10-01 22:19:54.290376] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.119 [2024-10-01 22:19:54.290386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.120 [2024-10-01 22:19:54.298396] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.120 [2024-10-01 22:19:54.298405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.120 [2024-10-01 22:19:54.306418] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.120 [2024-10-01 22:19:54.306430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.120 [2024-10-01 22:19:54.314436] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.120 [2024-10-01 22:19:54.314443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.120 [2024-10-01 22:19:54.322456] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.120 [2024-10-01 22:19:54.322463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.120 [2024-10-01 22:19:54.330477] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.120 [2024-10-01 22:19:54.330484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.120 [2024-10-01 22:19:54.338498] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.120 [2024-10-01 22:19:54.338504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.120 [2024-10-01 22:19:54.346521] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.120 [2024-10-01 22:19:54.346530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.120 [2024-10-01 22:19:54.354542] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.120 [2024-10-01 22:19:54.354552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.120 [2024-10-01 22:19:54.362560] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.120 [2024-10-01 22:19:54.362567] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.120 [2024-10-01 22:19:54.370581] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.120 [2024-10-01 22:19:54.370588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.381 [2024-10-01 22:19:54.378601] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.381 [2024-10-01 22:19:54.378608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.381 [2024-10-01 22:19:54.386622] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.381 [2024-10-01 22:19:54.386632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.381 [2024-10-01 22:19:54.394647] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.381 [2024-10-01 22:19:54.394653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.381 [2024-10-01 22:19:54.402668] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.381 [2024-10-01 22:19:54.402677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.381 [2024-10-01 22:19:54.410687] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.381 [2024-10-01 22:19:54.410694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.381 [2024-10-01 22:19:54.418708] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.381 [2024-10-01 22:19:54.418715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.381 [2024-10-01 22:19:54.426731] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.381 [2024-10-01 22:19:54.426740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.381 [2024-10-01 22:19:54.434750] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.381 [2024-10-01 22:19:54.434757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.381 [2024-10-01 22:19:54.442772] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.381 [2024-10-01 22:19:54.442780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.381 [2024-10-01 22:19:54.450793] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.381 [2024-10-01 22:19:54.450800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.381 [2024-10-01 22:19:54.458814] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.381 [2024-10-01 22:19:54.458827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.381 [2024-10-01 22:19:54.466835] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.381 [2024-10-01 22:19:54.466842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.381 [2024-10-01 22:19:54.474857] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.381 [2024-10-01 22:19:54.474864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.381 [2024-10-01 22:19:54.482879] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.381 [2024-10-01 22:19:54.482885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.381 [2024-10-01 22:19:54.490900] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.381 [2024-10-01 22:19:54.490907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.381 [2024-10-01 22:19:54.538098] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.381 [2024-10-01 22:19:54.538113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.381 [2024-10-01 22:19:54.543041] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.381 [2024-10-01 22:19:54.543051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.381 Running I/O for 5 seconds... 00:20:59.381 [2024-10-01 22:19:54.551057] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.381 [2024-10-01 22:19:54.551064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.381 [2024-10-01 22:19:54.563089] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.381 [2024-10-01 22:19:54.563104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.381 [2024-10-01 22:19:54.569903] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.381 [2024-10-01 22:19:54.569918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.381 [2024-10-01 22:19:54.579125] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.381 [2024-10-01 22:19:54.579140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.382 [2024-10-01 22:19:54.587742] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.382 [2024-10-01 22:19:54.587757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.382 [2024-10-01 22:19:54.596469] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.382 [2024-10-01 22:19:54.596484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.382 [2024-10-01 22:19:54.605331] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.382 [2024-10-01 22:19:54.605346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.382 [2024-10-01 22:19:54.614420] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.382 [2024-10-01 22:19:54.614434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.382 [2024-10-01 22:19:54.623153] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.382 [2024-10-01 22:19:54.623168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.382 [2024-10-01 22:19:54.631763] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.382 [2024-10-01 22:19:54.631777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.644 [2024-10-01 22:19:54.640684] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.644 
[2024-10-01 22:19:54.640699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.644 [2024-10-01 22:19:54.649154] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.644 [2024-10-01 22:19:54.649169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.644 [2024-10-01 22:19:54.657619] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.644 [2024-10-01 22:19:54.657645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.644 [2024-10-01 22:19:54.666822] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.644 [2024-10-01 22:19:54.666837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.644 [2024-10-01 22:19:54.675831] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.644 [2024-10-01 22:19:54.675845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.644 [2024-10-01 22:19:54.684887] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.644 [2024-10-01 22:19:54.684901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.644 [2024-10-01 22:19:54.694017] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.644 [2024-10-01 22:19:54.694031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.644 [2024-10-01 22:19:54.703177] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.644 [2024-10-01 22:19:54.703191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.644 [2024-10-01 22:19:54.712382] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.644 [2024-10-01 22:19:54.712396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.644 [2024-10-01 22:19:54.721019] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.644 [2024-10-01 22:19:54.721034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.644 [2024-10-01 22:19:54.729973] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.644 [2024-10-01 22:19:54.729987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.644 [2024-10-01 22:19:54.738516] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.644 [2024-10-01 22:19:54.738530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.644 [2024-10-01 22:19:54.747547] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.644 [2024-10-01 22:19:54.747561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.644 [2024-10-01 22:19:54.756431] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.644 [2024-10-01 22:19:54.756445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.644 [2024-10-01 22:19:54.764937] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.644 [2024-10-01 22:19:54.764951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.644 [2024-10-01 22:19:54.774148] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.644 [2024-10-01 22:19:54.774161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.644 [2024-10-01 22:19:54.782830] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.644 [2024-10-01 22:19:54.782845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.644 [2024-10-01 22:19:54.791285] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.644 [2024-10-01 22:19:54.791299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.644 [2024-10-01 22:19:54.800035] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.644 [2024-10-01 22:19:54.800050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.644 [2024-10-01 22:19:54.808798] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.644 [2024-10-01 22:19:54.808812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.644 [2024-10-01 22:19:54.817362] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.644 [2024-10-01 22:19:54.817376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.644 [2024-10-01 22:19:54.826134] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.644 [2024-10-01 22:19:54.826148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.644 [2024-10-01 22:19:54.835280] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.644 [2024-10-01 22:19:54.835295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.644 [2024-10-01 22:19:54.844417] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.644 [2024-10-01 22:19:54.844432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.644 [2024-10-01 22:19:54.853262] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.644 [2024-10-01 22:19:54.853277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.644 [2024-10-01 22:19:54.861263] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.644 [2024-10-01 22:19:54.861277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.644 [2024-10-01 22:19:54.870113] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.644 [2024-10-01 22:19:54.870128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.644 [2024-10-01 22:19:54.878761] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.644 [2024-10-01 22:19:54.878776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.644 [2024-10-01 22:19:54.887402] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.644 [2024-10-01 22:19:54.887417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.644 [2024-10-01 22:19:54.895902] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.644 [2024-10-01 22:19:54.895917] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.905 [2024-10-01 22:19:54.904988] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.905 [2024-10-01 22:19:54.905003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.905 [2024-10-01 22:19:54.913511] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.905 [2024-10-01 22:19:54.913525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.905 [2024-10-01 22:19:54.922201] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.905 [2024-10-01 22:19:54.922216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.905 [2024-10-01 22:19:54.930875] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.905 [2024-10-01 22:19:54.930890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.905 [2024-10-01 22:19:54.940046] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.905 [2024-10-01 22:19:54.940060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.905 [2024-10-01 22:19:54.948745] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.905 [2024-10-01 22:19:54.948759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.905 [2024-10-01 22:19:54.957564] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.905 [2024-10-01 22:19:54.957578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.905 [2024-10-01 22:19:54.966678] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.905 [2024-10-01 22:19:54.966693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.905 [2024-10-01 22:19:54.975245] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.905 [2024-10-01 22:19:54.975260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.905 [2024-10-01 22:19:54.984045] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.905 [2024-10-01 22:19:54.984060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.905 [2024-10-01 22:19:54.992837] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.905 [2024-10-01 22:19:54.992853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.905 [2024-10-01 22:19:55.002261] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.905 [2024-10-01 22:19:55.002276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.905 [2024-10-01 22:19:55.010335] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.905 [2024-10-01 22:19:55.010349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.905 [2024-10-01 22:19:55.019074] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.905 [2024-10-01 22:19:55.019089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.905 [2024-10-01 22:19:55.027895] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.905 [2024-10-01 22:19:55.027909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.905 [2024-10-01 22:19:55.036086] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.905 [2024-10-01 22:19:55.036100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.905 [2024-10-01 22:19:55.044993] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.905 [2024-10-01 22:19:55.045007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.905 [2024-10-01 22:19:55.053534] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.905 [2024-10-01 22:19:55.053548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.905 [2024-10-01 22:19:55.061819] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.905 [2024-10-01 22:19:55.061833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.905 [2024-10-01 22:19:55.070874] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.905 [2024-10-01 22:19:55.070889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.905 [2024-10-01 22:19:55.079765] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.905 [2024-10-01 22:19:55.079780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.905 [2024-10-01 22:19:55.088865] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.905 [2024-10-01 22:19:55.088879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.905 [2024-10-01 22:19:55.097709] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.905 [2024-10-01 22:19:55.097724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.905 [2024-10-01 22:19:55.105594] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.905 [2024-10-01 22:19:55.105609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.905 [2024-10-01 22:19:55.114374] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.905 [2024-10-01 22:19:55.114388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.905 [2024-10-01 22:19:55.123042] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.905 [2024-10-01 22:19:55.123057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.905 [2024-10-01 22:19:55.131888] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.905 [2024-10-01 22:19:55.131903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.905 [2024-10-01 22:19:55.140644] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.905 [2024-10-01 22:19:55.140659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.905 [2024-10-01 22:19:55.149377] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.905 [2024-10-01 22:19:55.149391] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:59.905 [2024-10-01 22:19:55.157910] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:59.906 [2024-10-01 22:19:55.157925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.166 [2024-10-01 22:19:55.166661] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.166 [2024-10-01 22:19:55.166675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.166 [2024-10-01 22:19:55.175629] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.166 [2024-10-01 22:19:55.175643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.166 [2024-10-01 22:19:55.184412] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.166 [2024-10-01 22:19:55.184427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.166 [2024-10-01 22:19:55.193563] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.166 [2024-10-01 22:19:55.193578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.166 [2024-10-01 22:19:55.202761] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.166 [2024-10-01 22:19:55.202776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.166 [2024-10-01 22:19:55.211846] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.166 [2024-10-01 22:19:55.211861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.166 [2024-10-01 22:19:55.220771] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.166 [2024-10-01 22:19:55.220785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.166 [2024-10-01 22:19:55.230047] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.166 [2024-10-01 22:19:55.230062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.166 [2024-10-01 22:19:55.238744] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.166 [2024-10-01 22:19:55.238759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.166 [2024-10-01 22:19:55.248051] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.166 [2024-10-01 22:19:55.248066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.166 [2024-10-01 22:19:55.256605] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.166 [2024-10-01 22:19:55.256619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.166 [2024-10-01 22:19:55.265745] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.166 [2024-10-01 22:19:55.265759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.166 [2024-10-01 22:19:55.274938] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.166 [2024-10-01 22:19:55.274953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.166 [2024-10-01 22:19:55.283509] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.166 [2024-10-01 22:19:55.283524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.166 [2024-10-01 22:19:55.292409] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.166 [2024-10-01 22:19:55.292424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.166 [2024-10-01 22:19:55.300862] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.166 [2024-10-01 22:19:55.300876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.166 [2024-10-01 22:19:55.310154] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.166 [2024-10-01 22:19:55.310169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.166 [2024-10-01 22:19:55.318293] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.166 [2024-10-01 22:19:55.318308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.166 [2024-10-01 22:19:55.327368] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.166 [2024-10-01 22:19:55.327383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.166 [2024-10-01 22:19:55.336716] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.166 [2024-10-01 22:19:55.336731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.166 [2024-10-01 22:19:55.345314] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.166 [2024-10-01 22:19:55.345329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.166 [2024-10-01 22:19:55.354269] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.166 [2024-10-01 22:19:55.354283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.166 [2024-10-01 22:19:55.363020] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.166 [2024-10-01 22:19:55.363034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.166 [2024-10-01 22:19:55.372088] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.166 [2024-10-01 22:19:55.372102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.166 [2024-10-01 22:19:55.380209] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.166 [2024-10-01 22:19:55.380223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.166 [2024-10-01 22:19:55.388878] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.166 [2024-10-01 22:19:55.388893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.166 [2024-10-01 22:19:55.397414] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.166 [2024-10-01 22:19:55.397429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.166 [2024-10-01 22:19:55.406498] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.166 [2024-10-01 22:19:55.406513] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.166 [2024-10-01 22:19:55.415362] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.166 [2024-10-01 22:19:55.415377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.427 [2024-10-01 22:19:55.424238] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.427 [2024-10-01 22:19:55.424252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.427 [2024-10-01 22:19:55.432572] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.427 [2024-10-01 22:19:55.432586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.427 [2024-10-01 22:19:55.441184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.427 [2024-10-01 22:19:55.441199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.427 [2024-10-01 22:19:55.449887] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.427 [2024-10-01 22:19:55.449902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.427 [2024-10-01 22:19:55.458797] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.427 [2024-10-01 22:19:55.458812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.427 [2024-10-01 22:19:55.467757] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.427 [2024-10-01 22:19:55.467771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.427 [2024-10-01 22:19:55.476670] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.427 [2024-10-01 22:19:55.476685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.427 [2024-10-01 22:19:55.485231] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.427 [2024-10-01 22:19:55.485249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.427 [2024-10-01 22:19:55.493604] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.427 [2024-10-01 22:19:55.493618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.427 [2024-10-01 22:19:55.502416] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.427 [2024-10-01 22:19:55.502430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.427 [2024-10-01 22:19:55.510386] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.427 [2024-10-01 22:19:55.510400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.427 [2024-10-01 22:19:55.519279] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.427 [2024-10-01 22:19:55.519292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.427 [2024-10-01 22:19:55.528232] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.427 [2024-10-01 22:19:55.528246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:00.427 [2024-10-01 22:19:55.537108] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:00.427 [2024-10-01 22:19:55.537122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:21:00.427 18867.00 IOPS, 147.40 MiB/s
[identical *ERROR* pairs from 22:19:55.545 through 22:19:56.548 condensed: every add-namespace attempt in this window fails the same way, one attempt roughly every 9 ms]
00:21:01.471 18951.50 IOPS, 148.06 MiB/s
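The failing call repeated above is SPDK's nvmf_subsystem_add_ns JSON-RPC: nvmf_rpc.c forwards the request and spdk_nvmf_subsystem_add_ns_ext rejects it because NSID 1 is already allocated in the subsystem. Below is a minimal sketch of driving that same RPC by hand against a running target; the socket path, subsystem NQN, and bdev name are assumed SPDK defaults, not values recoverable from this log.

```python
#!/usr/bin/env python3
# Sketch only: issue the nvmf_subsystem_add_ns JSON-RPC that this loop
# exercises. Socket path, NQN and bdev name are assumptions (common
# SPDK defaults), not taken from this log.
import json
import socket

SOCK_PATH = "/var/tmp/spdk.sock"  # default SPDK app RPC socket (assumed)

def rpc(method, params):
    """One-shot JSON-RPC 2.0 exchange over the SPDK Unix-domain socket."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(SOCK_PATH)
        req = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
        sock.sendall(json.dumps(req).encode())
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                raise ConnectionError("SPDK closed the RPC socket")
            buf += chunk
            try:
                return json.loads(buf)  # return once a full JSON doc arrived
            except json.JSONDecodeError:
                continue  # response not complete yet

resp = rpc("nvmf_subsystem_add_ns", {
    "nqn": "nqn.2016-06.io.spdk:cnode1",   # assumed subsystem NQN
    "namespace": {"bdev_name": "Malloc0",  # assumed backing bdev
                  "nsid": 1},              # NSID 1, as in the errors above
})
# A second call with the same nsid makes the target log
# "Requested NSID 1 already in use" and return a JSON-RPC error,
# which is the failure repeated throughout this section.
print(resp)
```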
[identical *ERROR* pairs from 22:19:56.558 through 22:19:57.555 condensed: each attempt still fails with "Requested NSID 1 already in use"; from 22:19:57.001 onward the attempts slow from roughly every 9 ms to roughly every 13 ms]
00:21:02.519 18973.00 IOPS, 148.23 MiB/s
[identical *ERROR* pairs from 22:19:57.569 through 22:19:58.558 condensed]
00:21:03.390 19004.50 IOPS, 148.47 MiB/s
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:03.390 [2024-10-01 22:19:58.635981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:03.651 [2024-10-01 22:19:58.649200] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:03.651 [2024-10-01 22:19:58.649215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:03.651 [2024-10-01 22:19:58.662433] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:03.651 [2024-10-01 22:19:58.662448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:03.651 [2024-10-01 22:19:58.675688] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:03.651 [2024-10-01 22:19:58.675703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:03.651 [2024-10-01 22:19:58.688908] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:03.651 [2024-10-01 22:19:58.688922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:03.651 [2024-10-01 22:19:58.702525] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:03.651 [2024-10-01 22:19:58.702539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:03.651 [2024-10-01 22:19:58.715774] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:03.651 [2024-10-01 22:19:58.715789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:03.651 [2024-10-01 22:19:58.729297] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:03.651 [2024-10-01 22:19:58.729312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:03.651 [2024-10-01 22:19:58.742753] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:03.651 [2024-10-01 22:19:58.742768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:03.651 [2024-10-01 22:19:58.756246] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:03.651 [2024-10-01 22:19:58.756261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:03.651 [2024-10-01 22:19:58.769475] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:03.651 [2024-10-01 22:19:58.769490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:03.651 [2024-10-01 22:19:58.782994] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:03.651 [2024-10-01 22:19:58.783009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:03.651 [2024-10-01 22:19:58.795838] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:03.651 [2024-10-01 22:19:58.795853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:03.651 [2024-10-01 22:19:58.808492] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:03.651 [2024-10-01 22:19:58.808506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:03.651 [2024-10-01 22:19:58.822445] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:03.651 [2024-10-01 22:19:58.822460] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:03.651 [2024-10-01 22:19:58.834731] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:03.651 [2024-10-01 22:19:58.834746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:03.651 [2024-10-01 22:19:58.847359] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:03.651 [2024-10-01 22:19:58.847373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:03.651 [2024-10-01 22:19:58.860650] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:03.651 [2024-10-01 22:19:58.860665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:03.651 [2024-10-01 22:19:58.874419] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:03.651 [2024-10-01 22:19:58.874433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:03.651 [2024-10-01 22:19:58.887334] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:03.651 [2024-10-01 22:19:58.887349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:03.912 [2024-10-01 22:19:58.908822] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:03.912 [2024-10-01 22:19:58.908838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:03.912 [2024-10-01 22:19:58.922235] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:03.912 [2024-10-01 22:19:58.922250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:03.912 [2024-10-01 22:19:58.935881] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:03.912 [2024-10-01 22:19:58.935895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:03.912 [2024-10-01 22:19:58.949468] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:03.912 [2024-10-01 22:19:58.949483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:03.912 [2024-10-01 22:19:58.963206] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:03.912 [2024-10-01 22:19:58.963221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:03.912 [2024-10-01 22:19:58.976336] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:03.912 [2024-10-01 22:19:58.976351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:03.912 [2024-10-01 22:19:58.989780] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:03.912 [2024-10-01 22:19:58.989794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:03.912 [2024-10-01 22:19:59.002532] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:03.912 [2024-10-01 22:19:59.002547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:03.912 [2024-10-01 22:19:59.015252] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:03.912 [2024-10-01 22:19:59.015266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:03.912 [2024-10-01 22:19:59.027895] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:03.912 [2024-10-01 22:19:59.027910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:03.912 [2024-10-01 22:19:59.041036] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:03.912 [2024-10-01 22:19:59.041051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:03.912 [2024-10-01 22:19:59.054543] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:03.912 [2024-10-01 22:19:59.054558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:03.912 [2024-10-01 22:19:59.067263] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:03.913 [2024-10-01 22:19:59.067278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:03.913 [2024-10-01 22:19:59.080758] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:03.913 [2024-10-01 22:19:59.080773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:03.913 [2024-10-01 22:19:59.093644] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:03.913 [2024-10-01 22:19:59.093658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:03.913 [2024-10-01 22:19:59.107390] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:03.913 [2024-10-01 22:19:59.107404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:03.913 [2024-10-01 22:19:59.120580] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:03.913 [2024-10-01 22:19:59.120594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:03.913 [2024-10-01 22:19:59.133692] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:03.913 [2024-10-01 22:19:59.133707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:03.913 [2024-10-01 22:19:59.147092] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:03.913 [2024-10-01 22:19:59.147106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:03.913 [2024-10-01 22:19:59.159815] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:03.913 [2024-10-01 22:19:59.159830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:04.176 [2024-10-01 22:19:59.172609] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:04.176 [2024-10-01 22:19:59.172627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:04.176 [2024-10-01 22:19:59.184981] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:04.176 [2024-10-01 22:19:59.184995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:04.176 [2024-10-01 22:19:59.197884] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:04.176 [2024-10-01 22:19:59.197898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:04.176 [2024-10-01 22:19:59.210693] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:04.176 [2024-10-01 22:19:59.210708] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:04.176 [2024-10-01 22:19:59.223361] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:04.176 [2024-10-01 22:19:59.223376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:04.176 [2024-10-01 22:19:59.236338] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:04.176 [2024-10-01 22:19:59.236352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:04.176 [2024-10-01 22:19:59.248749] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:04.176 [2024-10-01 22:19:59.248764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:04.176 [2024-10-01 22:19:59.261140] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:04.176 [2024-10-01 22:19:59.261154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:04.176 [2024-10-01 22:19:59.273857] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:04.176 [2024-10-01 22:19:59.273872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:04.176 [2024-10-01 22:19:59.287297] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:04.176 [2024-10-01 22:19:59.287311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:04.176 [2024-10-01 22:19:59.300518] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:04.176 [2024-10-01 22:19:59.300533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:04.176 [2024-10-01 22:19:59.313656] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:04.176 [2024-10-01 22:19:59.313671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:04.176 [2024-10-01 22:19:59.326934] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:04.176 [2024-10-01 22:19:59.326949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:04.176 [2024-10-01 22:19:59.339544] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:04.176 [2024-10-01 22:19:59.339558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:04.176 [2024-10-01 22:19:59.351953] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:04.176 [2024-10-01 22:19:59.351967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:04.176 [2024-10-01 22:19:59.365700] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:04.176 [2024-10-01 22:19:59.365714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:04.176 [2024-10-01 22:19:59.378723] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:04.176 [2024-10-01 22:19:59.378738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:04.176 [2024-10-01 22:19:59.392187] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:04.176 [2024-10-01 22:19:59.392202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:04.176 [2024-10-01 22:19:59.405315] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:04.176 [2024-10-01 22:19:59.405330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:04.176 [2024-10-01 22:19:59.418822] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:04.176 [2024-10-01 22:19:59.418836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:04.437 [2024-10-01 22:19:59.431599] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:04.437 [2024-10-01 22:19:59.431614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:04.437 [2024-10-01 22:19:59.444656] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:04.437 [2024-10-01 22:19:59.444670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:04.437 [2024-10-01 22:19:59.457503] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:04.437 [2024-10-01 22:19:59.457521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:04.437 [2024-10-01 22:19:59.470264] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:04.437 [2024-10-01 22:19:59.470277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:04.437 [2024-10-01 22:19:59.483805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:04.437 [2024-10-01 22:19:59.483820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:04.437 [2024-10-01 22:19:59.496324] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:04.437 [2024-10-01 22:19:59.496339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:04.437 [2024-10-01 22:19:59.508919] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:04.437 [2024-10-01 22:19:59.508934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:04.437 [2024-10-01 22:19:59.521545] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:04.437 [2024-10-01 22:19:59.521559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:04.437 [2024-10-01 22:19:59.534399] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:04.437 [2024-10-01 22:19:59.534413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:04.437 [2024-10-01 22:19:59.546601] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:04.437 [2024-10-01 22:19:59.546616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:04.437 [2024-10-01 22:19:59.560125] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:04.437 [2024-10-01 22:19:59.560140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:04.437 19022.20 IOPS, 148.61 MiB/s 00:21:04.437 Latency(us) 00:21:04.437 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:04.437 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:21:04.437 Nvme1n1 : 5.00 19031.42 148.68 0.00 0.00 6719.83 2594.13 16493.23 00:21:04.437 
=================================================================================================================== 00:21:04.437 Total : 19031.42 148.68 0.00 0.00 6719.83 2594.13 16493.23 00:21:04.437 [2024-10-01 22:19:59.569649] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:04.437 [2024-10-01 22:19:59.569662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:04.437 [2024-10-01 22:19:59.581675] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:04.437 [2024-10-01 22:19:59.581686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:04.437 [2024-10-01 22:19:59.593708] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:04.437 [2024-10-01 22:19:59.593721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:04.437 [2024-10-01 22:19:59.605737] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:04.437 [2024-10-01 22:19:59.605748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:04.437 [2024-10-01 22:19:59.617764] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:04.437 [2024-10-01 22:19:59.617775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:04.437 [2024-10-01 22:19:59.629794] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:04.437 [2024-10-01 22:19:59.629803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:04.437 [2024-10-01 22:19:59.641824] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:04.437 [2024-10-01 22:19:59.641833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:04.437 [2024-10-01 22:19:59.653856] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:04.437 [2024-10-01 22:19:59.653872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:04.437 [2024-10-01 22:19:59.665887] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:04.437 [2024-10-01 22:19:59.665898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:04.437 [2024-10-01 22:19:59.677916] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:04.437 [2024-10-01 22:19:59.677925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:04.437 [2024-10-01 22:19:59.689945] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:04.437 [2024-10-01 22:19:59.689954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:04.698 [2024-10-01 22:19:59.701977] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:04.698 [2024-10-01 22:19:59.701986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:04.698 [2024-10-01 22:19:59.714008] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:04.698 [2024-10-01 22:19:59.714017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:04.698 [2024-10-01 22:19:59.726038] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:04.698 [2024-10-01 22:19:59.726047] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:04.698 [2024-10-01 22:19:59.738069] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:04.698 [2024-10-01 22:19:59.738079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:04.698 [2024-10-01 22:19:59.750099] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:04.698 [2024-10-01 22:19:59.750106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:04.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (54998) - No such process 00:21:04.698 22:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 54998 00:21:04.698 22:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:04.698 22:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.698 22:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:04.698 22:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.698 22:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:21:04.698 22:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.698 22:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:04.698 delay0 00:21:04.698 22:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.698 22:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:21:04.698 22:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.698 22:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:04.698 22:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.698 22:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:21:04.698 [2024-10-01 22:19:59.937958] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:21:11.285 Initializing NVMe Controllers 00:21:11.285 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:11.285 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:11.285 Initialization complete. Launching workers. 
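In plain commands, the namespace swap that zcopy.sh@52 through @56 performs above amounts to the following (a sketch: paths are relative to the spdk checkout, the script actually drives the RPCs through its rpc_cmd wrapper, and the flag comments are my reading of them):

  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1           # detach the malloc-backed NSID 1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000                                # wrap malloc0 in a ~1 s latency delay bdev
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1    # re-attach it as NSID 1
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'              # 5 s of queue-depth-64 I/O plus abort requests

The delay bdev keeps each I/O in flight long enough that the abort requests have something to cancel; the success/unsuccessful counts in the summary that follows are presumably the aborts that did and did not catch their target command in time.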
00:21:11.285 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 246 00:21:11.285 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 520, failed to submit 46 00:21:11.285 success 246, unsuccessful 274, failed 0 00:21:11.285 22:20:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:21:11.285 22:20:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:21:11.285 22:20:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:11.285 22:20:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:21:11.285 22:20:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:11.285 22:20:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:21:11.285 22:20:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:11.285 22:20:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:11.285 rmmod nvme_tcp 00:21:11.285 rmmod nvme_fabrics 00:21:11.285 rmmod nvme_keyring 00:21:11.285 22:20:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:11.285 22:20:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:21:11.285 22:20:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:21:11.285 22:20:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 52638 ']' 00:21:11.285 22:20:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 52638 00:21:11.286 22:20:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 52638 ']' 00:21:11.286 22:20:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 52638 00:21:11.286 22:20:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:21:11.286 22:20:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:11.286 22:20:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 52638 00:21:11.286 22:20:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:11.286 22:20:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:11.286 22:20:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 52638' 00:21:11.286 killing process with pid 52638 00:21:11.286 22:20:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 52638 00:21:11.286 22:20:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 52638 00:21:11.286 22:20:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:11.286 22:20:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:11.286 22:20:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:11.286 22:20:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:21:11.286 22:20:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:21:11.286 22:20:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:11.286 22:20:06 
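The teardown trace above follows the harness's usual killprocess pattern. A minimal sketch of that helper, reconstructed from the xtrace alone (the real autotest_common.sh version has more guards, and the sudo branch is elided):

  killprocess() {
      local pid=$1 process_name
      [ -n "$pid" ] || return 1                         # the '[' -z ... ']' guard in the trace
      kill -0 "$pid" || return 1                        # is the process still alive?
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")
      fi
      [ "$process_name" = sudo ] && : # real helper handles the sudo wrapper specially (elided)
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true                               # reap it and ignore the exit status
  }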
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:21:11.286 22:20:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:11.286 22:20:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:11.286 22:20:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.286 22:20:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:11.286 22:20:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:13.837 00:21:13.837 real 0m33.786s 00:21:13.837 user 0m46.279s 00:21:13.837 sys 0m10.045s 00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:13.837 ************************************ 00:21:13.837 END TEST nvmf_zcopy 00:21:13.837 ************************************ 00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:21:13.837 ************************************ 00:21:13.837 START TEST nvmf_nmic 00:21:13.837 ************************************ 00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:21:13.837 * Looking for test storage... 
00:21:13.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:13.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.837 --rc genhtml_branch_coverage=1 00:21:13.837 --rc genhtml_function_coverage=1 00:21:13.837 --rc genhtml_legend=1 00:21:13.837 --rc geninfo_all_blocks=1 00:21:13.837 --rc geninfo_unexecuted_blocks=1 00:21:13.837 00:21:13.837 ' 00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:13.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.837 --rc genhtml_branch_coverage=1 00:21:13.837 --rc genhtml_function_coverage=1 00:21:13.837 --rc genhtml_legend=1 00:21:13.837 --rc geninfo_all_blocks=1 00:21:13.837 --rc geninfo_unexecuted_blocks=1 00:21:13.837 00:21:13.837 ' 00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:13.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.837 --rc genhtml_branch_coverage=1 00:21:13.837 --rc genhtml_function_coverage=1 00:21:13.837 --rc genhtml_legend=1 00:21:13.837 --rc geninfo_all_blocks=1 00:21:13.837 --rc geninfo_unexecuted_blocks=1 00:21:13.837 00:21:13.837 ' 00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:13.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.837 --rc genhtml_branch_coverage=1 00:21:13.837 --rc genhtml_function_coverage=1 00:21:13.837 --rc genhtml_legend=1 00:21:13.837 --rc geninfo_all_blocks=1 00:21:13.837 --rc geninfo_unexecuted_blocks=1 00:21:13.837 00:21:13.837 ' 00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
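The lt/cmp_versions dance traced above (scripts/common.sh@333 through @368) splits each version string on '.', '-' and ':' and compares component by component. A compact sketch of the same idea, reconstructed from the xtrace (the real helper supports more operators and non-numeric components):

  cmp_versions() {
      local IFS=.-:                      # split versions on dots, dashes and colons
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$3"
      local op=$2 v d1 d2
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          d1=${ver1[v]:-0} d2=${ver2[v]:-0}             # missing components compare as 0
          ((d1 > d2)) && { [[ $op == '>' || $op == '>=' ]]; return; }
          ((d1 < d2)) && { [[ $op == '<' || $op == '<=' ]]; return; }
      done
      [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all components equal
  }
  lt() { cmp_versions "$1" '<' "$2"; }   # e.g. lt 1.15 2 succeeds, as in the trace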
00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204
00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204
00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob
00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same three toolchain dirs repeated several more times ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:13.837 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same tail as @2, with the go dir prepended once more ...]
00:21:13.838 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same tail, with the protoc dir prepended once more ...]
00:21:13.838 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH
00:21:13.838 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... full PATH as assembled above ...]:/var/lib/snapd/snap/bin
00:21:13.838 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0
00:21:13.838 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:21:13.838 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:21:13.838 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:21:13.838 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:21:13.838 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:21:13.838 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:21:13.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:21:13.838 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:21:13.838 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:21:13.838 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0
00:21:13.838 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
00:21:13.838 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:21:13.838 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:21:13.838
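One real wart worth flagging in the trace above: nvmf/common.sh line 33 runs '[' '' -eq 1 ']', and test(1) refuses to compare an empty string numerically, which is exactly the "integer expression expected" message logged. The harness shrugs it off, but the shape of the bug and the usual guard look like this (VAR is a placeholder; which variable is actually empty is not visible in the log):

  VAR=''
  [ "$VAR" -eq 1 ]          # prints "[: : integer expression expected", exits with status 2
  [ "${VAR:-0}" -eq 1 ]     # guarded form: empty defaults to 0, the test is well-formed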
22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:13.838 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:13.838 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:13.838 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:13.838 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:13.838 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.838 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:13.838 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.838 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:13.838 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:13.838 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:21:13.838 22:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:21.985 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:21.985 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:21.985 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:21.986 22:20:16 
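The PCI scan above, plus the netdev mapping that follows, can be paraphrased as a few lines of shell (a sketch, not the harness code: it hard-codes the E810 device ID 0x159b seen in the trace and assumes lspci is available):

  # Find supported NICs by PCI vendor:device ID, then map each function to
  # its kernel netdev through sysfs, as the trace does with pci_net_devs.
  for pci in $(lspci -Dn | awk '$3 == "8086:159b" {print $1}'); do
      for dev in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$dev" ] && echo "Found net devices under $pci: ${dev##*/}"
      done
  done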
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:21.986 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:21.986 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:21.986 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:21.986 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:21:21.986 00:21:21.986 --- 10.0.0.2 ping statistics --- 00:21:21.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.986 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:21.986 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:21.986 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:21:21.986 00:21:21.986 --- 10.0.0.1 ping statistics --- 00:21:21.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.986 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=61691 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 61691 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 61691 ']' 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:21.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:21.986 22:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:21.986 [2024-10-01 22:20:16.419836] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
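(For readers reconstructing the topology: the nvmf_tcp_init trace above moves one E810 port, cvl_0_0, into a private network namespace to act as the NVMe/TCP target, while its peer port cvl_0_1 stays in the default namespace as the initiator. A minimal sketch of the same sequence, using the interface names and addresses from this run, with the harness's timeouts and error handling omitted:

# target-side port lives in its own namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# initiator keeps 10.0.0.1, target answers on 10.0.0.2
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP listening port, then verify reachability both ways
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Both pings succeeding is what lets nvmf_tcp_init return 0 above and the harness proceed to nvmfappstart.)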
00:21:21.986 [2024-10-01 22:20:16.419904] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:21.986 [2024-10-01 22:20:16.490631] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:21.986 [2024-10-01 22:20:16.566103] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:21.986 [2024-10-01 22:20:16.566143] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:21.986 [2024-10-01 22:20:16.566151] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:21.986 [2024-10-01 22:20:16.566158] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:21.986 [2024-10-01 22:20:16.566164] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:21.986 [2024-10-01 22:20:16.566300] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:21.986 [2024-10-01 22:20:16.566425] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:21:21.986 [2024-10-01 22:20:16.566582] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:21.986 [2024-10-01 22:20:16.566583] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:21:21.986 22:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:21.986 22:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:21:21.986 22:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:21.986 22:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:21.986 22:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:22.248 22:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:22.248 22:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:22.248 22:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.248 22:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:22.248 [2024-10-01 22:20:17.277607] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:22.248 22:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.248 22:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:22.248 22:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.248 22:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:22.248 Malloc0 00:21:22.248 22:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.248 22:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:21:22.248 22:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.248 22:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:21:22.248 22:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.248 22:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:22.248 22:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.248 22:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:22.249 22:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.249 22:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:22.249 22:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.249 22:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:22.249 [2024-10-01 22:20:17.336882] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:22.249 22:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.249 22:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:21:22.249 test case1: single bdev can't be used in multiple subsystems 00:21:22.249 22:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:21:22.249 22:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.249 22:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:22.249 22:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.249 22:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:22.249 22:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.249 22:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:22.249 22:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.249 22:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:21:22.249 22:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:21:22.249 22:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.249 22:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:22.249 [2024-10-01 22:20:17.372827] bdev.c:8241:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:21:22.249 [2024-10-01 22:20:17.372846] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:21:22.249 [2024-10-01 22:20:17.372854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:22.249 request: 00:21:22.249 { 00:21:22.249 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:21:22.249 "namespace": { 00:21:22.249 "bdev_name": "Malloc0", 00:21:22.249 "no_auto_visible": false 
00:21:22.249 }, 00:21:22.249 "method": "nvmf_subsystem_add_ns", 00:21:22.249 "req_id": 1 00:21:22.249 } 00:21:22.249 Got JSON-RPC error response 00:21:22.249 response: 00:21:22.249 { 00:21:22.249 "code": -32602, 00:21:22.249 "message": "Invalid parameters" 00:21:22.249 } 00:21:22.249 22:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:22.249 22:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:21:22.249 22:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:21:22.249 22:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:21:22.249 Adding namespace failed - expected result. 00:21:22.249 22:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:21:22.249 test case2: host connect to nvmf target in multiple paths 00:21:22.249 22:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:22.249 22:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.249 22:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:22.249 [2024-10-01 22:20:17.384983] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:22.249 22:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.249 22:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:24.158 22:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:21:25.540 22:20:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:21:25.540 22:20:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:21:25.540 22:20:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:25.540 22:20:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:21:25.540 22:20:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:21:27.453 22:20:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:27.453 22:20:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:27.453 22:20:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:21:27.453 22:20:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:21:27.453 22:20:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:27.453 22:20:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:21:27.453 22:20:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:21:27.453 [global] 00:21:27.453 thread=1 00:21:27.453 invalidate=1 00:21:27.453 rw=write 00:21:27.453 time_based=1 00:21:27.453 runtime=1 00:21:27.454 ioengine=libaio 00:21:27.454 direct=1 00:21:27.454 bs=4096 00:21:27.454 iodepth=1 00:21:27.454 norandommap=0 00:21:27.454 numjobs=1 00:21:27.454 00:21:27.454 verify_dump=1 00:21:27.454 verify_backlog=512 00:21:27.454 verify_state_save=0 00:21:27.454 do_verify=1 00:21:27.454 verify=crc32c-intel 00:21:27.454 [job0] 00:21:27.454 filename=/dev/nvme0n1 00:21:27.454 Could not set queue depth (nvme0n1) 00:21:27.715 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:27.715 fio-3.35 00:21:27.715 Starting 1 thread 00:21:29.100 00:21:29.100 job0: (groupid=0, jobs=1): err= 0: pid=63119: Tue Oct 1 22:20:24 2024 00:21:29.100 read: IOPS=457, BW=1830KiB/s (1874kB/s)(1832KiB/1001msec) 00:21:29.100 slat (nsec): min=9803, max=41695, avg=25017.70, stdev=1757.46 00:21:29.100 clat (usec): min=493, max=42909, avg=1502.05, stdev=4671.47 00:21:29.100 lat (usec): min=518, max=42934, avg=1527.07, stdev=4671.46 00:21:29.100 clat percentiles (usec): 00:21:29.100 | 1.00th=[ 742], 5.00th=[ 840], 10.00th=[ 873], 20.00th=[ 930], 00:21:29.100 | 30.00th=[ 955], 40.00th=[ 963], 50.00th=[ 979], 60.00th=[ 988], 00:21:29.100 | 70.00th=[ 996], 80.00th=[ 1012], 90.00th=[ 1037], 95.00th=[ 1057], 00:21:29.100 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:21:29.100 | 99.99th=[42730] 00:21:29.100 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:21:29.100 slat (nsec): min=9462, max=61038, avg=26887.56, stdev=10002.11 00:21:29.100 clat (usec): min=309, max=825, avg=547.18, stdev=89.38 00:21:29.100 lat (usec): min=319, max=857, avg=574.07, stdev=94.13 00:21:29.100 clat percentiles (usec): 00:21:29.100 | 1.00th=[ 351], 5.00th=[ 367], 10.00th=[ 433], 20.00th=[ 461], 00:21:29.100 | 30.00th=[ 523], 40.00th=[ 537], 50.00th=[ 553], 60.00th=[ 570], 00:21:29.100 | 70.00th=[ 594], 80.00th=[ 627], 90.00th=[ 652], 95.00th=[ 676], 00:21:29.100 | 99.00th=[ 725], 99.50th=[ 734], 99.90th=[ 824], 99.95th=[ 824], 00:21:29.100 | 99.99th=[ 824] 00:21:29.100 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:21:29.100 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:21:29.100 lat (usec) : 500=14.23%, 750=38.97%, 1000=33.30% 00:21:29.100 lat (msec) : 2=12.89%, 50=0.62% 00:21:29.100 cpu : usr=0.90%, sys=3.20%, ctx=970, majf=0, minf=1 00:21:29.100 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:29.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:29.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:29.100 issued rwts: total=458,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:29.100 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:29.100 00:21:29.100 Run status group 0 (all jobs): 00:21:29.100 READ: bw=1830KiB/s (1874kB/s), 1830KiB/s-1830KiB/s (1874kB/s-1874kB/s), io=1832KiB (1876kB), run=1001-1001msec 00:21:29.100 WRITE: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:21:29.100 00:21:29.100 Disk stats (read/write): 00:21:29.100 nvme0n1: ios=406/512, merge=0/0, ticks=663/268, in_queue=931, util=94.19% 00:21:29.100 22:20:24 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:29.100 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:21:29.100 22:20:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:29.100 22:20:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:21:29.100 22:20:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:29.100 22:20:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:29.100 22:20:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:29.100 22:20:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:29.100 22:20:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:21:29.100 22:20:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:21:29.100 22:20:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:21:29.100 22:20:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:29.100 22:20:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:21:29.101 22:20:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:29.101 22:20:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:21:29.101 22:20:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:29.101 22:20:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:29.101 rmmod nvme_tcp 00:21:29.101 rmmod nvme_fabrics 00:21:29.101 rmmod nvme_keyring 00:21:29.101 22:20:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:29.101 22:20:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:21:29.101 22:20:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:21:29.101 22:20:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 61691 ']' 00:21:29.101 22:20:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 61691 00:21:29.101 22:20:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 61691 ']' 00:21:29.101 22:20:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 61691 00:21:29.101 22:20:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:21:29.101 22:20:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:29.101 22:20:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61691 00:21:29.361 22:20:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:29.361 22:20:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:29.361 22:20:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61691' 00:21:29.361 killing process with pid 61691 00:21:29.361 22:20:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 61691 00:21:29.361 22:20:24 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 61691 00:21:29.361 22:20:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:29.361 22:20:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:29.361 22:20:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:29.361 22:20:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:21:29.361 22:20:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:21:29.361 22:20:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:29.361 22:20:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:21:29.361 22:20:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:29.361 22:20:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:29.361 22:20:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:29.361 22:20:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:29.361 22:20:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.907 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:31.907 00:21:31.907 real 0m18.058s 00:21:31.907 user 0m44.989s 00:21:31.907 sys 0m6.737s 00:21:31.907 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:31.908 ************************************ 00:21:31.908 END TEST nvmf_nmic 00:21:31.908 ************************************ 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:21:31.908 ************************************ 00:21:31.908 START TEST nvmf_fio_target 00:21:31.908 ************************************ 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:21:31.908 * Looking for test storage... 
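(The teardown that closed out nvmf_nmic just above mirrors that setup: nvmftestfini kills the target by its recorded pid, unloads the kernel initiator modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines), strips only the SPDK_NVMF-tagged iptables rules, and removes the namespace before flushing cvl_0_1. A condensed sketch of those steps; the ip netns delete line is an assumption, since the body of _remove_spdk_ns is redirected to /dev/null in this trace:

kill 61691 && wait 61691                               # killprocess $nvmfpid
modprobe -v -r nvme-tcp                                # also drops nvme_fabrics, nvme_keyring
modprobe -v -r nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: keep everything except SPDK-tagged rules
ip netns delete cvl_0_0_ns_spdk                        # assumed _remove_spdk_ns body (hidden above)
ip -4 addr flush cvl_0_1

Filtering iptables-save through grep -v SPDK_NVMF is why the setup side tagged its ACCEPT rule with the SPDK_NVMF comment: cleanup can then drop exactly the rules this test added and nothing else.)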
00:21:31.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:31.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.908 --rc genhtml_branch_coverage=1 00:21:31.908 --rc genhtml_function_coverage=1 00:21:31.908 --rc genhtml_legend=1 00:21:31.908 --rc geninfo_all_blocks=1 00:21:31.908 --rc geninfo_unexecuted_blocks=1 00:21:31.908 00:21:31.908 ' 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:31.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.908 --rc genhtml_branch_coverage=1 00:21:31.908 --rc genhtml_function_coverage=1 00:21:31.908 --rc genhtml_legend=1 00:21:31.908 --rc geninfo_all_blocks=1 00:21:31.908 --rc geninfo_unexecuted_blocks=1 00:21:31.908 00:21:31.908 ' 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:31.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.908 --rc genhtml_branch_coverage=1 00:21:31.908 --rc genhtml_function_coverage=1 00:21:31.908 --rc genhtml_legend=1 00:21:31.908 --rc geninfo_all_blocks=1 00:21:31.908 --rc geninfo_unexecuted_blocks=1 00:21:31.908 00:21:31.908 ' 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:31.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.908 --rc genhtml_branch_coverage=1 00:21:31.908 --rc genhtml_function_coverage=1 00:21:31.908 --rc genhtml_legend=1 00:21:31.908 --rc geninfo_all_blocks=1 00:21:31.908 --rc geninfo_unexecuted_blocks=1 00:21:31.908 00:21:31.908 ' 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.908 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:21:31.909 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:31.909 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:31.909 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:31.909 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:31.909 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:31.909 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:31.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:31.909 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:31.909 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:31.909 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:31.909 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:31.909 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:31.909 22:20:26 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:31.909 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:21:31.909 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:31.909 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:31.909 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:31.909 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:31.909 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:31.909 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.909 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:31.909 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.909 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:31.909 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:31.909 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:21:31.909 22:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.051 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:40.051 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:21:40.051 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:40.051 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:40.051 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:40.051 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:40.051 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:40.051 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:21:40.051 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:40.051 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:21:40.051 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:21:40.051 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:21:40.051 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:21:40.051 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:21:40.051 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:21:40.051 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:40.051 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:40.051 22:20:33 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:40.051 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:40.051 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:40.051 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:40.051 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:40.051 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:40.051 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:40.051 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:40.051 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:40.051 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:40.051 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:40.051 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:40.051 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:40.051 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:40.051 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:40.051 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:40.052 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:40.052 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:40.052 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:40.052 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:40.052 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:40.052 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.052 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.052 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:40.052 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:40.052 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:40.052 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:40.052 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:40.052 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:40.052 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.052 22:20:33 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.052 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:40.052 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:40.052 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:40.052 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:40.052 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:40.052 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.052 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:40.052 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:40.052 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:40.052 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:40.052 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.052 22:20:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:40.052 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:40.052 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:40.052 22:20:34 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:40.052 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:40.052 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.598 ms 00:21:40.052 00:21:40.052 --- 10.0.0.2 ping statistics --- 00:21:40.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.052 rtt min/avg/max/mdev = 0.598/0.598/0.598/0.000 ms 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:40.052 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:40.052 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:21:40.052 00:21:40.052 --- 10.0.0.1 ping statistics --- 00:21:40.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.052 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=67581 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 67581 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 67581 ']' 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.052 22:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:40.052 [2024-10-01 22:20:34.392139] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
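(nvmfappstart then repeats the pattern from the nmic test: launch nvmf_tgt inside the target namespace and block in waitforlisten until the JSON-RPC socket answers. A rough equivalent of that wait, assuming scripts/rpc.py and the default /var/tmp/spdk.sock; the real helper in autotest_common.sh is more defensive:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# poll the RPC socket until the app is up, bailing out if the target died
for _ in $(seq 1 100); do
    scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    kill -0 "$nvmfpid" || exit 1
    sleep 0.1
done

Once the socket responds, fio.sh drives the setup visible below: nvmf_create_transport -t tcp -o -u 8192, seven Malloc bdevs, a raid0 over Malloc2/Malloc3 plus a concat array over Malloc4/Malloc5/Malloc6, and a subsystem exposing Malloc0, Malloc1, raid0 and concat0 on 10.0.0.2:4420, which is why the initiator later waits for exactly four namespaces.)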
00:21:40.052 [2024-10-01 22:20:34.392205] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:40.052 [2024-10-01 22:20:34.463561] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:40.052 [2024-10-01 22:20:34.539041] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:40.052 [2024-10-01 22:20:34.539078] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:40.052 [2024-10-01 22:20:34.539086] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:40.052 [2024-10-01 22:20:34.539092] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:40.052 [2024-10-01 22:20:34.539098] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:40.052 [2024-10-01 22:20:34.539234] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:40.052 [2024-10-01 22:20:34.539351] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:21:40.052 [2024-10-01 22:20:34.539507] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:40.052 [2024-10-01 22:20:34.539508] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:21:40.053 22:20:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:40.053 22:20:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:21:40.053 22:20:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:40.053 22:20:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:40.053 22:20:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.053 22:20:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:40.053 22:20:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:40.314 [2024-10-01 22:20:35.390218] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:40.314 22:20:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:40.575 22:20:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:21:40.575 22:20:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:40.575 22:20:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:21:40.575 22:20:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:40.836 22:20:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:21:40.836 22:20:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:41.098 22:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:21:41.098 22:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:21:41.359 22:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:41.359 22:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:21:41.359 22:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:41.621 22:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:21:41.621 22:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:41.882 22:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:21:41.882 22:20:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:21:42.142 22:20:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:21:42.142 22:20:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:21:42.142 22:20:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:42.402 22:20:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:21:42.402 22:20:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:42.662 22:20:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:42.662 [2024-10-01 22:20:37.839818] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:42.662 22:20:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:21:42.923 22:20:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:21:43.184 22:20:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:44.565 22:20:39 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:21:44.565 22:20:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:21:44.565 22:20:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:44.565 22:20:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:21:44.566 22:20:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:21:44.566 22:20:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:21:47.112 22:20:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:47.112 22:20:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:47.112 22:20:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:21:47.112 22:20:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:21:47.112 22:20:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:47.112 22:20:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:21:47.112 22:20:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:21:47.112 [global] 00:21:47.113 thread=1 00:21:47.113 invalidate=1 00:21:47.113 rw=write 00:21:47.113 time_based=1 00:21:47.113 runtime=1 00:21:47.113 ioengine=libaio 00:21:47.113 direct=1 00:21:47.113 bs=4096 00:21:47.113 iodepth=1 00:21:47.113 norandommap=0 00:21:47.113 numjobs=1 00:21:47.113 00:21:47.113 verify_dump=1 00:21:47.113 verify_backlog=512 00:21:47.113 verify_state_save=0 00:21:47.113 do_verify=1 00:21:47.113 verify=crc32c-intel 00:21:47.113 [job0] 00:21:47.113 filename=/dev/nvme0n1 00:21:47.113 [job1] 00:21:47.113 filename=/dev/nvme0n2 00:21:47.113 [job2] 00:21:47.113 filename=/dev/nvme0n3 00:21:47.113 [job3] 00:21:47.113 filename=/dev/nvme0n4 00:21:47.113 Could not set queue depth (nvme0n1) 00:21:47.113 Could not set queue depth (nvme0n2) 00:21:47.113 Could not set queue depth (nvme0n3) 00:21:47.113 Could not set queue depth (nvme0n4) 00:21:47.113 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:47.113 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:47.113 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:47.113 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:47.113 fio-3.35 00:21:47.113 Starting 4 threads 00:21:48.502 00:21:48.502 job0: (groupid=0, jobs=1): err= 0: pid=69503: Tue Oct 1 22:20:43 2024 00:21:48.502 read: IOPS=193, BW=773KiB/s (792kB/s)(792KiB/1024msec) 00:21:48.502 slat (nsec): min=6635, max=38685, avg=20451.19, stdev=8491.02 00:21:48.502 clat (usec): min=281, max=43036, avg=3415.88, stdev=10242.72 00:21:48.502 lat (usec): min=288, max=43051, avg=3436.33, stdev=10243.97 00:21:48.502 clat percentiles (usec): 00:21:48.502 | 1.00th=[ 355], 5.00th=[ 412], 10.00th=[ 486], 20.00th=[ 586], 
00:21:48.502 | 30.00th=[ 668], 40.00th=[ 709], 50.00th=[ 742], 60.00th=[ 766], 00:21:48.502 | 70.00th=[ 824], 80.00th=[ 857], 90.00th=[ 930], 95.00th=[41681], 00:21:48.502 | 99.00th=[42206], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:21:48.502 | 99.99th=[43254] 00:21:48.502 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:21:48.502 slat (usec): min=5, max=27120, avg=82.26, stdev=1215.80 00:21:48.502 clat (usec): min=214, max=974, avg=575.26, stdev=120.10 00:21:48.502 lat (usec): min=221, max=27741, avg=657.52, stdev=1224.48 00:21:48.502 clat percentiles (usec): 00:21:48.502 | 1.00th=[ 314], 5.00th=[ 363], 10.00th=[ 416], 20.00th=[ 474], 00:21:48.502 | 30.00th=[ 519], 40.00th=[ 553], 50.00th=[ 578], 60.00th=[ 611], 00:21:48.502 | 70.00th=[ 644], 80.00th=[ 685], 90.00th=[ 734], 95.00th=[ 766], 00:21:48.502 | 99.00th=[ 832], 99.50th=[ 857], 99.90th=[ 971], 99.95th=[ 971], 00:21:48.502 | 99.99th=[ 971] 00:21:48.502 bw ( KiB/s): min= 4096, max= 4096, per=38.72%, avg=4096.00, stdev= 0.00, samples=1 00:21:48.502 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:21:48.502 lat (usec) : 250=0.28%, 500=22.11%, 750=59.86%, 1000=15.63% 00:21:48.502 lat (msec) : 2=0.28%, 50=1.83% 00:21:48.502 cpu : usr=0.98%, sys=0.98%, ctx=716, majf=0, minf=1 00:21:48.502 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:48.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:48.502 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:48.502 issued rwts: total=198,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:48.502 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:48.502 job1: (groupid=0, jobs=1): err= 0: pid=69504: Tue Oct 1 22:20:43 2024 00:21:48.502 read: IOPS=603, BW=2414KiB/s (2472kB/s)(2416KiB/1001msec) 00:21:48.502 slat (nsec): min=6850, max=68600, avg=23333.84, stdev=7240.85 00:21:48.502 clat (usec): min=400, max=996, avg=748.25, stdev=105.20 00:21:48.502 lat (usec): min=426, max=1021, avg=771.59, stdev=107.20 00:21:48.502 clat percentiles (usec): 00:21:48.502 | 1.00th=[ 469], 5.00th=[ 562], 10.00th=[ 603], 20.00th=[ 652], 00:21:48.502 | 30.00th=[ 685], 40.00th=[ 717], 50.00th=[ 766], 60.00th=[ 799], 00:21:48.502 | 70.00th=[ 816], 80.00th=[ 840], 90.00th=[ 873], 95.00th=[ 898], 00:21:48.502 | 99.00th=[ 938], 99.50th=[ 971], 99.90th=[ 996], 99.95th=[ 996], 00:21:48.502 | 99.99th=[ 996] 00:21:48.502 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:21:48.502 slat (nsec): min=9137, max=63213, avg=31709.80, stdev=7532.99 00:21:48.502 clat (usec): min=132, max=800, avg=478.35, stdev=118.98 00:21:48.502 lat (usec): min=165, max=850, avg=510.06, stdev=120.92 00:21:48.502 clat percentiles (usec): 00:21:48.502 | 1.00th=[ 229], 5.00th=[ 273], 10.00th=[ 330], 20.00th=[ 375], 00:21:48.502 | 30.00th=[ 408], 40.00th=[ 453], 50.00th=[ 482], 60.00th=[ 510], 00:21:48.502 | 70.00th=[ 545], 80.00th=[ 594], 90.00th=[ 635], 95.00th=[ 660], 00:21:48.502 | 99.00th=[ 725], 99.50th=[ 750], 99.90th=[ 791], 99.95th=[ 799], 00:21:48.502 | 99.99th=[ 799] 00:21:48.502 bw ( KiB/s): min= 4096, max= 4096, per=38.72%, avg=4096.00, stdev= 0.00, samples=1 00:21:48.502 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:21:48.502 lat (usec) : 250=1.72%, 500=34.71%, 750=43.55%, 1000=20.02% 00:21:48.502 cpu : usr=2.50%, sys=4.70%, ctx=1629, majf=0, minf=2 00:21:48.502 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:48.502 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:48.502 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:48.502 issued rwts: total=604,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:48.502 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:48.502 job2: (groupid=0, jobs=1): err= 0: pid=69505: Tue Oct 1 22:20:43 2024 00:21:48.502 read: IOPS=14, BW=59.8KiB/s (61.2kB/s)(60.0KiB/1004msec) 00:21:48.502 slat (nsec): min=27050, max=28080, avg=27504.67, stdev=233.18 00:21:48.502 clat (usec): min=41850, max=43021, avg=42125.18, stdev=374.35 00:21:48.502 lat (usec): min=41877, max=43048, avg=42152.68, stdev=374.27 00:21:48.502 clat percentiles (usec): 00:21:48.502 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:21:48.502 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:21:48.502 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[43254], 00:21:48.502 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:21:48.502 | 99.99th=[43254] 00:21:48.502 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:21:48.502 slat (nsec): min=9866, max=69287, avg=31913.88, stdev=9793.70 00:21:48.502 clat (usec): min=175, max=900, avg=680.89, stdev=119.53 00:21:48.502 lat (usec): min=188, max=935, avg=712.80, stdev=124.15 00:21:48.502 clat percentiles (usec): 00:21:48.502 | 1.00th=[ 359], 5.00th=[ 441], 10.00th=[ 523], 20.00th=[ 603], 00:21:48.502 | 30.00th=[ 635], 40.00th=[ 660], 50.00th=[ 693], 60.00th=[ 725], 00:21:48.502 | 70.00th=[ 758], 80.00th=[ 783], 90.00th=[ 816], 95.00th=[ 840], 00:21:48.502 | 99.00th=[ 881], 99.50th=[ 889], 99.90th=[ 898], 99.95th=[ 898], 00:21:48.502 | 99.99th=[ 898] 00:21:48.502 bw ( KiB/s): min= 4096, max= 4096, per=38.72%, avg=4096.00, stdev= 0.00, samples=1 00:21:48.502 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:21:48.502 lat (usec) : 250=0.19%, 500=7.59%, 750=58.44%, 1000=30.93% 00:21:48.502 lat (msec) : 50=2.85% 00:21:48.502 cpu : usr=0.90%, sys=2.19%, ctx=528, majf=0, minf=1 00:21:48.502 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:48.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:48.502 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:48.502 issued rwts: total=15,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:48.502 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:48.502 job3: (groupid=0, jobs=1): err= 0: pid=69506: Tue Oct 1 22:20:43 2024 00:21:48.502 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:21:48.502 slat (nsec): min=25724, max=44613, avg=26804.42, stdev=3048.57 00:21:48.502 clat (usec): min=818, max=1459, avg=1076.43, stdev=95.90 00:21:48.502 lat (usec): min=844, max=1486, avg=1103.24, stdev=95.75 00:21:48.502 clat percentiles (usec): 00:21:48.502 | 1.00th=[ 848], 5.00th=[ 914], 10.00th=[ 963], 20.00th=[ 1012], 00:21:48.502 | 30.00th=[ 1037], 40.00th=[ 1057], 50.00th=[ 1074], 60.00th=[ 1090], 00:21:48.502 | 70.00th=[ 1106], 80.00th=[ 1139], 90.00th=[ 1188], 95.00th=[ 1254], 00:21:48.502 | 99.00th=[ 1352], 99.50th=[ 1369], 99.90th=[ 1467], 99.95th=[ 1467], 00:21:48.502 | 99.99th=[ 1467] 00:21:48.502 write: IOPS=659, BW=2637KiB/s (2701kB/s)(2640KiB/1001msec); 0 zone resets 00:21:48.502 slat (nsec): min=9372, max=80358, avg=29917.40, stdev=9367.25 00:21:48.502 clat (usec): min=288, max=972, avg=615.34, stdev=113.90 00:21:48.502 lat (usec): min=298, max=1005, 
avg=645.26, stdev=118.10 00:21:48.502 clat percentiles (usec): 00:21:48.502 | 1.00th=[ 351], 5.00th=[ 420], 10.00th=[ 457], 20.00th=[ 519], 00:21:48.502 | 30.00th=[ 562], 40.00th=[ 594], 50.00th=[ 627], 60.00th=[ 644], 00:21:48.502 | 70.00th=[ 676], 80.00th=[ 709], 90.00th=[ 750], 95.00th=[ 791], 00:21:48.502 | 99.00th=[ 873], 99.50th=[ 914], 99.90th=[ 971], 99.95th=[ 971], 00:21:48.502 | 99.99th=[ 971] 00:21:48.502 bw ( KiB/s): min= 4096, max= 4096, per=38.72%, avg=4096.00, stdev= 0.00, samples=1 00:21:48.502 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:21:48.502 lat (usec) : 500=9.30%, 750=41.30%, 1000=13.05% 00:21:48.502 lat (msec) : 2=36.35% 00:21:48.502 cpu : usr=2.80%, sys=4.00%, ctx=1173, majf=0, minf=2 00:21:48.502 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:48.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:48.502 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:48.502 issued rwts: total=512,660,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:48.502 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:48.502 00:21:48.502 Run status group 0 (all jobs): 00:21:48.502 READ: bw=5191KiB/s (5316kB/s), 59.8KiB/s-2414KiB/s (61.2kB/s-2472kB/s), io=5316KiB (5444kB), run=1001-1024msec 00:21:48.502 WRITE: bw=10.3MiB/s (10.8MB/s), 2000KiB/s-4092KiB/s (2048kB/s-4190kB/s), io=10.6MiB (11.1MB), run=1001-1024msec 00:21:48.502 00:21:48.502 Disk stats (read/write): 00:21:48.502 nvme0n1: ios=93/512, merge=0/0, ticks=950/276, in_queue=1226, util=85.57% 00:21:48.503 nvme0n2: ios=562/825, merge=0/0, ticks=463/379, in_queue=842, util=89.96% 00:21:48.503 nvme0n3: ios=67/512, merge=0/0, ticks=1042/295, in_queue=1337, util=91.06% 00:21:48.503 nvme0n4: ios=502/512, merge=0/0, ticks=533/240, in_queue=773, util=97.09% 00:21:48.503 22:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:21:48.503 [global] 00:21:48.503 thread=1 00:21:48.503 invalidate=1 00:21:48.503 rw=randwrite 00:21:48.503 time_based=1 00:21:48.503 runtime=1 00:21:48.503 ioengine=libaio 00:21:48.503 direct=1 00:21:48.503 bs=4096 00:21:48.503 iodepth=1 00:21:48.503 norandommap=0 00:21:48.503 numjobs=1 00:21:48.503 00:21:48.503 verify_dump=1 00:21:48.503 verify_backlog=512 00:21:48.503 verify_state_save=0 00:21:48.503 do_verify=1 00:21:48.503 verify=crc32c-intel 00:21:48.503 [job0] 00:21:48.503 filename=/dev/nvme0n1 00:21:48.503 [job1] 00:21:48.503 filename=/dev/nvme0n2 00:21:48.503 [job2] 00:21:48.503 filename=/dev/nvme0n3 00:21:48.503 [job3] 00:21:48.503 filename=/dev/nvme0n4 00:21:48.503 Could not set queue depth (nvme0n1) 00:21:48.503 Could not set queue depth (nvme0n2) 00:21:48.503 Could not set queue depth (nvme0n3) 00:21:48.503 Could not set queue depth (nvme0n4) 00:21:48.764 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:48.764 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:48.764 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:48.764 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:48.764 fio-3.35 00:21:48.764 Starting 4 threads 00:21:50.151 00:21:50.151 job0: (groupid=0, jobs=1): err= 0: pid=70028: Tue Oct 1 22:20:45 
2024 00:21:50.151 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:21:50.151 slat (nsec): min=25597, max=57665, avg=26564.72, stdev=3190.24 00:21:50.151 clat (usec): min=546, max=1226, avg=1008.19, stdev=91.15 00:21:50.151 lat (usec): min=572, max=1252, avg=1034.75, stdev=90.99 00:21:50.151 clat percentiles (usec): 00:21:50.151 | 1.00th=[ 709], 5.00th=[ 840], 10.00th=[ 914], 20.00th=[ 955], 00:21:50.151 | 30.00th=[ 979], 40.00th=[ 996], 50.00th=[ 1012], 60.00th=[ 1029], 00:21:50.151 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[ 1106], 95.00th=[ 1139], 00:21:50.151 | 99.00th=[ 1188], 99.50th=[ 1205], 99.90th=[ 1221], 99.95th=[ 1221], 00:21:50.151 | 99.99th=[ 1221] 00:21:50.151 write: IOPS=667, BW=2669KiB/s (2733kB/s)(2672KiB/1001msec); 0 zone resets 00:21:50.151 slat (nsec): min=8837, max=77064, avg=30056.28, stdev=7957.69 00:21:50.151 clat (usec): min=182, max=1001, avg=659.83, stdev=137.12 00:21:50.151 lat (usec): min=214, max=1033, avg=689.89, stdev=139.41 00:21:50.151 clat percentiles (usec): 00:21:50.151 | 1.00th=[ 289], 5.00th=[ 404], 10.00th=[ 461], 20.00th=[ 545], 00:21:50.151 | 30.00th=[ 603], 40.00th=[ 635], 50.00th=[ 676], 60.00th=[ 709], 00:21:50.151 | 70.00th=[ 734], 80.00th=[ 775], 90.00th=[ 824], 95.00th=[ 865], 00:21:50.151 | 99.00th=[ 922], 99.50th=[ 938], 99.90th=[ 1004], 99.95th=[ 1004], 00:21:50.151 | 99.99th=[ 1004] 00:21:50.151 bw ( KiB/s): min= 4096, max= 4096, per=38.80%, avg=4096.00, stdev= 0.00, samples=1 00:21:50.151 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:21:50.151 lat (usec) : 250=0.34%, 500=6.86%, 750=35.51%, 1000=31.36% 00:21:50.151 lat (msec) : 2=25.93% 00:21:50.151 cpu : usr=2.70%, sys=4.30%, ctx=1181, majf=0, minf=2 00:21:50.151 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:50.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.151 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.151 issued rwts: total=512,668,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:50.151 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:50.151 job1: (groupid=0, jobs=1): err= 0: pid=70029: Tue Oct 1 22:20:45 2024 00:21:50.151 read: IOPS=633, BW=2533KiB/s (2594kB/s)(2536KiB/1001msec) 00:21:50.151 slat (nsec): min=6472, max=48288, avg=25923.98, stdev=6055.39 00:21:50.151 clat (usec): min=433, max=1070, avg=818.31, stdev=104.95 00:21:50.151 lat (usec): min=441, max=1097, avg=844.23, stdev=105.75 00:21:50.151 clat percentiles (usec): 00:21:50.151 | 1.00th=[ 519], 5.00th=[ 644], 10.00th=[ 676], 20.00th=[ 734], 00:21:50.151 | 30.00th=[ 766], 40.00th=[ 799], 50.00th=[ 832], 60.00th=[ 857], 00:21:50.151 | 70.00th=[ 881], 80.00th=[ 906], 90.00th=[ 947], 95.00th=[ 971], 00:21:50.151 | 99.00th=[ 1012], 99.50th=[ 1029], 99.90th=[ 1074], 99.95th=[ 1074], 00:21:50.151 | 99.99th=[ 1074] 00:21:50.151 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:21:50.151 slat (nsec): min=9086, max=65283, avg=31128.81, stdev=8858.76 00:21:50.151 clat (usec): min=183, max=883, avg=410.32, stdev=108.46 00:21:50.151 lat (usec): min=211, max=917, avg=441.45, stdev=110.12 00:21:50.151 clat percentiles (usec): 00:21:50.151 | 1.00th=[ 204], 5.00th=[ 241], 10.00th=[ 285], 20.00th=[ 306], 00:21:50.151 | 30.00th=[ 330], 40.00th=[ 367], 50.00th=[ 416], 60.00th=[ 441], 00:21:50.151 | 70.00th=[ 474], 80.00th=[ 510], 90.00th=[ 553], 95.00th=[ 594], 00:21:50.151 | 99.00th=[ 652], 99.50th=[ 676], 99.90th=[ 725], 99.95th=[ 881], 00:21:50.151 | 99.99th=[ 
881] 00:21:50.151 bw ( KiB/s): min= 4096, max= 4096, per=38.80%, avg=4096.00, stdev= 0.00, samples=1 00:21:50.151 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:21:50.151 lat (usec) : 250=3.32%, 500=45.05%, 750=22.98%, 1000=28.05% 00:21:50.151 lat (msec) : 2=0.60% 00:21:50.151 cpu : usr=1.80%, sys=8.00%, ctx=1659, majf=0, minf=1 00:21:50.151 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:50.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.151 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.151 issued rwts: total=634,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:50.151 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:50.151 job2: (groupid=0, jobs=1): err= 0: pid=70030: Tue Oct 1 22:20:45 2024 00:21:50.151 read: IOPS=17, BW=71.1KiB/s (72.9kB/s)(72.0KiB/1012msec) 00:21:50.151 slat (nsec): min=26060, max=45056, avg=29664.67, stdev=5924.13 00:21:50.151 clat (usec): min=945, max=42976, avg=37544.67, stdev=13295.80 00:21:50.151 lat (usec): min=990, max=43003, avg=37574.33, stdev=13291.57 00:21:50.151 clat percentiles (usec): 00:21:50.151 | 1.00th=[ 947], 5.00th=[ 947], 10.00th=[ 1074], 20.00th=[41681], 00:21:50.151 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:21:50.151 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:21:50.151 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:21:50.151 | 99.99th=[42730] 00:21:50.151 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:21:50.151 slat (nsec): min=9310, max=67015, avg=30231.09, stdev=9660.68 00:21:50.151 clat (usec): min=323, max=856, avg=616.74, stdev=107.21 00:21:50.151 lat (usec): min=335, max=890, avg=646.97, stdev=111.39 00:21:50.151 clat percentiles (usec): 00:21:50.151 | 1.00th=[ 343], 5.00th=[ 441], 10.00th=[ 474], 20.00th=[ 529], 00:21:50.151 | 30.00th=[ 562], 40.00th=[ 594], 50.00th=[ 619], 60.00th=[ 652], 00:21:50.151 | 70.00th=[ 676], 80.00th=[ 717], 90.00th=[ 750], 95.00th=[ 783], 00:21:50.151 | 99.00th=[ 832], 99.50th=[ 857], 99.90th=[ 857], 99.95th=[ 857], 00:21:50.151 | 99.99th=[ 857] 00:21:50.151 bw ( KiB/s): min= 4096, max= 4096, per=38.80%, avg=4096.00, stdev= 0.00, samples=1 00:21:50.152 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:21:50.152 lat (usec) : 500=14.34%, 750=71.89%, 1000=10.57% 00:21:50.152 lat (msec) : 2=0.19%, 50=3.02% 00:21:50.152 cpu : usr=0.99%, sys=1.98%, ctx=532, majf=0, minf=1 00:21:50.152 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:50.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.152 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.152 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:50.152 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:50.152 job3: (groupid=0, jobs=1): err= 0: pid=70031: Tue Oct 1 22:20:45 2024 00:21:50.152 read: IOPS=18, BW=73.9KiB/s (75.6kB/s)(76.0KiB/1029msec) 00:21:50.152 slat (nsec): min=27312, max=28213, avg=27600.84, stdev=217.42 00:21:50.152 clat (usec): min=1048, max=42966, avg=39906.58, stdev=9420.69 00:21:50.152 lat (usec): min=1076, max=42994, avg=39934.18, stdev=9420.68 00:21:50.152 clat percentiles (usec): 00:21:50.152 | 1.00th=[ 1045], 5.00th=[ 1045], 10.00th=[41157], 20.00th=[41681], 00:21:50.152 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:21:50.152 | 
70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:21:50.152 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:21:50.152 | 99.99th=[42730] 00:21:50.152 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:21:50.152 slat (nsec): min=9630, max=56247, avg=30324.95, stdev=9550.37 00:21:50.152 clat (usec): min=114, max=905, avg=489.41, stdev=148.99 00:21:50.152 lat (usec): min=124, max=939, avg=519.74, stdev=152.69 00:21:50.152 clat percentiles (usec): 00:21:50.152 | 1.00th=[ 130], 5.00th=[ 253], 10.00th=[ 289], 20.00th=[ 359], 00:21:50.152 | 30.00th=[ 408], 40.00th=[ 457], 50.00th=[ 498], 60.00th=[ 529], 00:21:50.152 | 70.00th=[ 578], 80.00th=[ 619], 90.00th=[ 676], 95.00th=[ 734], 00:21:50.152 | 99.00th=[ 816], 99.50th=[ 848], 99.90th=[ 906], 99.95th=[ 906], 00:21:50.152 | 99.99th=[ 906] 00:21:50.152 bw ( KiB/s): min= 4096, max= 4096, per=38.80%, avg=4096.00, stdev= 0.00, samples=1 00:21:50.152 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:21:50.152 lat (usec) : 250=4.33%, 500=44.07%, 750=45.20%, 1000=2.82% 00:21:50.152 lat (msec) : 2=0.19%, 50=3.39% 00:21:50.152 cpu : usr=0.58%, sys=1.75%, ctx=532, majf=0, minf=1 00:21:50.152 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:50.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.152 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.152 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:50.152 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:50.152 00:21:50.152 Run status group 0 (all jobs): 00:21:50.152 READ: bw=4599KiB/s (4709kB/s), 71.1KiB/s-2533KiB/s (72.9kB/s-2594kB/s), io=4732KiB (4846kB), run=1001-1029msec 00:21:50.152 WRITE: bw=10.3MiB/s (10.8MB/s), 1990KiB/s-4092KiB/s (2038kB/s-4190kB/s), io=10.6MiB (11.1MB), run=1001-1029msec 00:21:50.152 00:21:50.152 Disk stats (read/write): 00:21:50.152 nvme0n1: ios=509/512, merge=0/0, ticks=489/282, in_queue=771, util=87.78% 00:21:50.152 nvme0n2: ios=550/890, merge=0/0, ticks=1383/277, in_queue=1660, util=98.88% 00:21:50.152 nvme0n3: ios=73/512, merge=0/0, ticks=572/247, in_queue=819, util=96.20% 00:21:50.152 nvme0n4: ios=38/512, merge=0/0, ticks=1496/220, in_queue=1716, util=96.80% 00:21:50.152 22:20:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:21:50.152 [global] 00:21:50.152 thread=1 00:21:50.152 invalidate=1 00:21:50.152 rw=write 00:21:50.152 time_based=1 00:21:50.152 runtime=1 00:21:50.152 ioengine=libaio 00:21:50.152 direct=1 00:21:50.152 bs=4096 00:21:50.152 iodepth=128 00:21:50.152 norandommap=0 00:21:50.152 numjobs=1 00:21:50.152 00:21:50.152 verify_dump=1 00:21:50.152 verify_backlog=512 00:21:50.152 verify_state_save=0 00:21:50.152 do_verify=1 00:21:50.152 verify=crc32c-intel 00:21:50.152 [job0] 00:21:50.152 filename=/dev/nvme0n1 00:21:50.152 [job1] 00:21:50.152 filename=/dev/nvme0n2 00:21:50.152 [job2] 00:21:50.152 filename=/dev/nvme0n3 00:21:50.152 [job3] 00:21:50.152 filename=/dev/nvme0n4 00:21:50.152 Could not set queue depth (nvme0n1) 00:21:50.152 Could not set queue depth (nvme0n2) 00:21:50.152 Could not set queue depth (nvme0n3) 00:21:50.152 Could not set queue depth (nvme0n4) 00:21:50.413 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:50.413 job1: (g=0): rw=write, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:50.413 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:50.413 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:50.413 fio-3.35 00:21:50.413 Starting 4 threads 00:21:51.798 00:21:51.798 job0: (groupid=0, jobs=1): err= 0: pid=70527: Tue Oct 1 22:20:46 2024 00:21:51.798 read: IOPS=3531, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1015msec) 00:21:51.798 slat (nsec): min=1008, max=9995.4k, avg=95941.53, stdev=665794.05 00:21:51.798 clat (usec): min=4130, max=35468, avg=11156.91, stdev=4538.61 00:21:51.798 lat (usec): min=4135, max=35471, avg=11252.85, stdev=4593.95 00:21:51.798 clat percentiles (usec): 00:21:51.798 | 1.00th=[ 4686], 5.00th=[ 4883], 10.00th=[ 6063], 20.00th=[ 8979], 00:21:51.798 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10683], 00:21:51.798 | 70.00th=[11994], 80.00th=[12518], 90.00th=[16450], 95.00th=[21103], 00:21:51.798 | 99.00th=[30278], 99.50th=[32900], 99.90th=[35390], 99.95th=[35390], 00:21:51.798 | 99.99th=[35390] 00:21:51.798 write: IOPS=3986, BW=15.6MiB/s (16.3MB/s)(15.8MiB/1015msec); 0 zone resets 00:21:51.798 slat (nsec): min=1691, max=12620k, avg=155452.58, stdev=774405.66 00:21:51.798 clat (usec): min=2419, max=86070, avg=21888.23, stdev=16807.32 00:21:51.798 lat (usec): min=2423, max=86079, avg=22043.68, stdev=16907.19 00:21:51.798 clat percentiles (usec): 00:21:51.798 | 1.00th=[ 2900], 5.00th=[ 4080], 10.00th=[ 5342], 20.00th=[ 5932], 00:21:51.798 | 30.00th=[ 9896], 40.00th=[16581], 50.00th=[17695], 60.00th=[19006], 00:21:51.798 | 70.00th=[26608], 80.00th=[34866], 90.00th=[46924], 95.00th=[50594], 00:21:51.798 | 99.00th=[79168], 99.50th=[84411], 99.90th=[86508], 99.95th=[86508], 00:21:51.798 | 99.99th=[86508] 00:21:51.798 bw ( KiB/s): min=12808, max=18544, per=21.30%, avg=15676.00, stdev=4055.96, samples=2 00:21:51.798 iops : min= 3202, max= 4636, avg=3919.00, stdev=1013.99, samples=2 00:21:51.798 lat (msec) : 4=2.56%, 10=37.04%, 20=36.72%, 50=20.73%, 100=2.95% 00:21:51.798 cpu : usr=3.06%, sys=3.94%, ctx=476, majf=0, minf=1 00:21:51.798 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:21:51.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:51.798 issued rwts: total=3584,4046,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:51.798 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:51.798 job1: (groupid=0, jobs=1): err= 0: pid=70539: Tue Oct 1 22:20:46 2024 00:21:51.798 read: IOPS=2071, BW=8287KiB/s (8485kB/s)(8328KiB/1005msec) 00:21:51.798 slat (nsec): min=910, max=21882k, avg=155961.08, stdev=1252267.48 00:21:51.798 clat (usec): min=3067, max=68073, avg=18678.65, stdev=17106.41 00:21:51.798 lat (usec): min=3076, max=68099, avg=18834.61, stdev=17277.61 00:21:51.798 clat percentiles (usec): 00:21:51.798 | 1.00th=[ 4359], 5.00th=[ 5145], 10.00th=[ 5604], 20.00th=[ 6783], 00:21:51.798 | 30.00th=[ 7373], 40.00th=[ 8356], 50.00th=[ 9634], 60.00th=[11469], 00:21:51.798 | 70.00th=[12256], 80.00th=[43779], 90.00th=[49546], 95.00th=[51119], 00:21:51.798 | 99.00th=[53216], 99.50th=[55313], 99.90th=[63177], 99.95th=[65274], 00:21:51.798 | 99.99th=[67634] 00:21:51.798 write: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec); 0 zone resets 00:21:51.798 slat (nsec): min=1639, max=41714k, avg=252317.98, 
stdev=1805900.00 00:21:51.798 clat (usec): min=852, max=127904, avg=31861.74, stdev=23677.48 00:21:51.798 lat (usec): min=890, max=127914, avg=32114.06, stdev=23831.13 00:21:51.798 clat percentiles (usec): 00:21:51.798 | 1.00th=[ 1778], 5.00th=[ 3326], 10.00th=[ 4113], 20.00th=[ 6652], 00:21:51.798 | 30.00th=[ 10028], 40.00th=[ 19006], 50.00th=[ 36439], 60.00th=[ 42206], 00:21:51.798 | 70.00th=[ 47973], 80.00th=[ 52691], 90.00th=[ 57410], 95.00th=[ 59507], 00:21:51.798 | 99.00th=[125305], 99.50th=[127402], 99.90th=[127402], 99.95th=[127402], 00:21:51.798 | 99.99th=[127402] 00:21:51.798 bw ( KiB/s): min= 6280, max=13456, per=13.41%, avg=9868.00, stdev=5074.20, samples=2 00:21:51.798 iops : min= 1570, max= 3364, avg=2467.00, stdev=1268.55, samples=2 00:21:51.798 lat (usec) : 1000=0.06% 00:21:51.798 lat (msec) : 2=0.78%, 4=4.20%, 10=34.49%, 20=15.19%, 50=27.83% 00:21:51.798 lat (msec) : 100=16.78%, 250=0.67% 00:21:51.798 cpu : usr=2.59%, sys=2.49%, ctx=271, majf=0, minf=1 00:21:51.798 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:21:51.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:51.798 issued rwts: total=2082,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:51.798 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:51.798 job2: (groupid=0, jobs=1): err= 0: pid=70554: Tue Oct 1 22:20:46 2024 00:21:51.798 read: IOPS=8118, BW=31.7MiB/s (33.3MB/s)(32.0MiB/1009msec) 00:21:51.798 slat (nsec): min=931, max=7189.3k, avg=59165.40, stdev=431693.29 00:21:51.798 clat (usec): min=3212, max=20614, avg=8216.91, stdev=2059.10 00:21:51.798 lat (usec): min=3551, max=20620, avg=8276.07, stdev=2089.00 00:21:51.798 clat percentiles (usec): 00:21:51.798 | 1.00th=[ 4228], 5.00th=[ 5866], 10.00th=[ 6325], 20.00th=[ 6652], 00:21:51.798 | 30.00th=[ 7111], 40.00th=[ 7308], 50.00th=[ 7832], 60.00th=[ 8291], 00:21:51.798 | 70.00th=[ 8586], 80.00th=[ 9241], 90.00th=[10945], 95.00th=[12780], 00:21:51.798 | 99.00th=[15139], 99.50th=[15270], 99.90th=[18220], 99.95th=[18744], 00:21:51.798 | 99.99th=[20579] 00:21:51.798 write: IOPS=8298, BW=32.4MiB/s (34.0MB/s)(32.7MiB/1009msec); 0 zone resets 00:21:51.798 slat (nsec): min=1613, max=5771.3k, avg=49962.42, stdev=359847.94 00:21:51.798 clat (usec): min=1212, max=32584, avg=7252.43, stdev=2989.16 00:21:51.798 lat (usec): min=1225, max=32586, avg=7302.39, stdev=3007.29 00:21:51.798 clat percentiles (usec): 00:21:51.798 | 1.00th=[ 3261], 5.00th=[ 4113], 10.00th=[ 4490], 20.00th=[ 5276], 00:21:51.798 | 30.00th=[ 5997], 40.00th=[ 6456], 50.00th=[ 6652], 60.00th=[ 6915], 00:21:51.798 | 70.00th=[ 7177], 80.00th=[ 8586], 90.00th=[11731], 95.00th=[12780], 00:21:51.798 | 99.00th=[18482], 99.50th=[24511], 99.90th=[30278], 99.95th=[31851], 00:21:51.798 | 99.99th=[32637] 00:21:51.798 bw ( KiB/s): min=31712, max=34256, per=44.81%, avg=32984.00, stdev=1798.88, samples=2 00:21:51.798 iops : min= 7928, max= 8564, avg=8246.00, stdev=449.72, samples=2 00:21:51.798 lat (msec) : 2=0.07%, 4=2.21%, 10=84.75%, 20=12.54%, 50=0.42% 00:21:51.798 cpu : usr=6.35%, sys=9.33%, ctx=468, majf=0, minf=2 00:21:51.798 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:21:51.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:51.798 issued rwts: total=8192,8373,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:51.798 
latency : target=0, window=0, percentile=100.00%, depth=128 00:21:51.798 job3: (groupid=0, jobs=1): err= 0: pid=70555: Tue Oct 1 22:20:46 2024 00:21:51.798 read: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec) 00:21:51.798 slat (nsec): min=910, max=10767k, avg=116668.10, stdev=823243.74 00:21:51.799 clat (usec): min=4769, max=42068, avg=12912.58, stdev=5260.73 00:21:51.799 lat (usec): min=4804, max=42075, avg=13029.25, stdev=5357.24 00:21:51.799 clat percentiles (usec): 00:21:51.799 | 1.00th=[ 5538], 5.00th=[ 9110], 10.00th=[10290], 20.00th=[10683], 00:21:51.799 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11207], 60.00th=[11338], 00:21:51.799 | 70.00th=[11600], 80.00th=[13042], 90.00th=[20317], 95.00th=[24511], 00:21:51.799 | 99.00th=[35914], 99.50th=[39060], 99.90th=[42206], 99.95th=[42206], 00:21:51.799 | 99.99th=[42206] 00:21:51.799 write: IOPS=3666, BW=14.3MiB/s (15.0MB/s)(14.4MiB/1009msec); 0 zone resets 00:21:51.799 slat (nsec): min=1606, max=8837.0k, avg=137774.10, stdev=647347.34 00:21:51.799 clat (usec): min=1207, max=66274, avg=22112.34, stdev=14385.55 00:21:51.799 lat (usec): min=1218, max=66281, avg=22250.11, stdev=14451.59 00:21:51.799 clat percentiles (usec): 00:21:51.799 | 1.00th=[ 2933], 5.00th=[ 5080], 10.00th=[ 6521], 20.00th=[ 9634], 00:21:51.799 | 30.00th=[13173], 40.00th=[16712], 50.00th=[17695], 60.00th=[18744], 00:21:51.799 | 70.00th=[26084], 80.00th=[36439], 90.00th=[44303], 95.00th=[53216], 00:21:51.799 | 99.00th=[62129], 99.50th=[64226], 99.90th=[66323], 99.95th=[66323], 00:21:51.799 | 99.99th=[66323] 00:21:51.799 bw ( KiB/s): min=12976, max=15744, per=19.51%, avg=14360.00, stdev=1957.27, samples=2 00:21:51.799 iops : min= 3244, max= 3936, avg=3590.00, stdev=489.32, samples=2 00:21:51.799 lat (msec) : 2=0.25%, 4=1.26%, 10=12.98%, 20=60.21%, 50=22.19% 00:21:51.799 lat (msec) : 100=3.12% 00:21:51.799 cpu : usr=1.59%, sys=4.37%, ctx=427, majf=0, minf=2 00:21:51.799 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:21:51.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.799 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:51.799 issued rwts: total=3584,3699,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:51.799 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:51.799 00:21:51.799 Run status group 0 (all jobs): 00:21:51.799 READ: bw=67.1MiB/s (70.4MB/s), 8287KiB/s-31.7MiB/s (8485kB/s-33.3MB/s), io=68.1MiB (71.4MB), run=1005-1015msec 00:21:51.799 WRITE: bw=71.9MiB/s (75.4MB/s), 9.95MiB/s-32.4MiB/s (10.4MB/s-34.0MB/s), io=73.0MiB (76.5MB), run=1005-1015msec 00:21:51.799 00:21:51.799 Disk stats (read/write): 00:21:51.799 nvme0n1: ios=3121/3375, merge=0/0, ticks=32953/65265, in_queue=98218, util=89.48% 00:21:51.799 nvme0n2: ios=2076/2103, merge=0/0, ticks=18013/17811, in_queue=35824, util=96.33% 00:21:51.799 nvme0n3: ios=6712/6902, merge=0/0, ticks=43553/40177, in_queue=83730, util=91.99% 00:21:51.799 nvme0n4: ios=2859/3072, merge=0/0, ticks=34072/67502, in_queue=101574, util=95.09% 00:21:51.799 22:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:21:51.799 [global] 00:21:51.799 thread=1 00:21:51.799 invalidate=1 00:21:51.799 rw=randwrite 00:21:51.799 time_based=1 00:21:51.799 runtime=1 00:21:51.799 ioengine=libaio 00:21:51.799 direct=1 00:21:51.799 bs=4096 00:21:51.799 iodepth=128 00:21:51.799 norandommap=0 00:21:51.799 numjobs=1 
00:21:51.799 00:21:51.799 verify_dump=1 00:21:51.799 verify_backlog=512 00:21:51.799 verify_state_save=0 00:21:51.799 do_verify=1 00:21:51.799 verify=crc32c-intel 00:21:51.799 [job0] 00:21:51.799 filename=/dev/nvme0n1 00:21:51.799 [job1] 00:21:51.799 filename=/dev/nvme0n2 00:21:51.799 [job2] 00:21:51.799 filename=/dev/nvme0n3 00:21:51.799 [job3] 00:21:51.799 filename=/dev/nvme0n4 00:21:51.799 Could not set queue depth (nvme0n1) 00:21:51.799 Could not set queue depth (nvme0n2) 00:21:51.799 Could not set queue depth (nvme0n3) 00:21:51.799 Could not set queue depth (nvme0n4) 00:21:52.060 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:52.060 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:52.060 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:52.060 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:52.060 fio-3.35 00:21:52.060 Starting 4 threads 00:21:53.517 00:21:53.517 job0: (groupid=0, jobs=1): err= 0: pid=70994: Tue Oct 1 22:20:48 2024 00:21:53.517 read: IOPS=6095, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1008msec) 00:21:53.517 slat (nsec): min=922, max=8426.9k, avg=76187.90, stdev=528856.87 00:21:53.517 clat (usec): min=2852, max=65713, avg=9555.87, stdev=5004.13 00:21:53.517 lat (usec): min=2873, max=65721, avg=9632.06, stdev=5077.78 00:21:53.517 clat percentiles (usec): 00:21:53.517 | 1.00th=[ 4817], 5.00th=[ 6259], 10.00th=[ 6652], 20.00th=[ 6980], 00:21:53.517 | 30.00th=[ 7635], 40.00th=[ 8094], 50.00th=[ 8586], 60.00th=[ 9110], 00:21:53.517 | 70.00th=[ 9372], 80.00th=[10421], 90.00th=[12518], 95.00th=[15139], 00:21:53.517 | 99.00th=[32900], 99.50th=[47973], 99.90th=[59507], 99.95th=[65799], 00:21:53.517 | 99.99th=[65799] 00:21:53.517 write: IOPS=6220, BW=24.3MiB/s (25.5MB/s)(24.5MiB/1008msec); 0 zone resets 00:21:53.518 slat (nsec): min=1686, max=7290.5k, avg=73341.55, stdev=448332.64 00:21:53.518 clat (usec): min=1095, max=74316, avg=11015.96, stdev=11196.89 00:21:53.518 lat (usec): min=1106, max=74322, avg=11089.30, stdev=11263.62 00:21:53.518 clat percentiles (usec): 00:21:53.518 | 1.00th=[ 2573], 5.00th=[ 4080], 10.00th=[ 4948], 20.00th=[ 5669], 00:21:53.518 | 30.00th=[ 5932], 40.00th=[ 6259], 50.00th=[ 6980], 60.00th=[ 7635], 00:21:53.518 | 70.00th=[ 8848], 80.00th=[14222], 90.00th=[22938], 95.00th=[30540], 00:21:53.518 | 99.00th=[68682], 99.50th=[70779], 99.90th=[73925], 99.95th=[73925], 00:21:53.518 | 99.99th=[73925] 00:21:53.518 bw ( KiB/s): min=21704, max=27504, per=31.42%, avg=24604.00, stdev=4101.22, samples=2 00:21:53.518 iops : min= 5426, max= 6876, avg=6151.00, stdev=1025.30, samples=2 00:21:53.518 lat (msec) : 2=0.23%, 4=2.22%, 10=73.48%, 20=16.51%, 50=6.27% 00:21:53.518 lat (msec) : 100=1.29% 00:21:53.518 cpu : usr=4.77%, sys=7.05%, ctx=404, majf=0, minf=1 00:21:53.518 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:21:53.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:53.518 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:53.518 issued rwts: total=6144,6270,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:53.518 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:53.518 job1: (groupid=0, jobs=1): err= 0: pid=71009: Tue Oct 1 22:20:48 2024 00:21:53.518 read: IOPS=3668, BW=14.3MiB/s (15.0MB/s)(15.0MiB/1046msec) 
00:21:53.518 slat (nsec): min=915, max=15285k, avg=121949.18, stdev=893111.42 00:21:53.518 clat (usec): min=4331, max=69839, avg=16935.59, stdev=11447.00 00:21:53.518 lat (usec): min=4336, max=80989, avg=17057.54, stdev=11516.90 00:21:53.518 clat percentiles (usec): 00:21:53.518 | 1.00th=[ 5276], 5.00th=[ 7111], 10.00th=[ 7570], 20.00th=[ 9503], 00:21:53.518 | 30.00th=[10552], 40.00th=[11338], 50.00th=[12256], 60.00th=[14353], 00:21:53.518 | 70.00th=[18482], 80.00th=[23987], 90.00th=[30540], 95.00th=[40633], 00:21:53.518 | 99.00th=[67634], 99.50th=[69731], 99.90th=[69731], 99.95th=[69731], 00:21:53.518 | 99.99th=[69731] 00:21:53.518 write: IOPS=3915, BW=15.3MiB/s (16.0MB/s)(16.0MiB/1046msec); 0 zone resets 00:21:53.518 slat (nsec): min=1557, max=14141k, avg=123038.30, stdev=822891.50 00:21:53.518 clat (usec): min=1159, max=66919, avg=16389.35, stdev=12344.23 00:21:53.518 lat (usec): min=1168, max=66928, avg=16512.38, stdev=12443.11 00:21:53.518 clat percentiles (usec): 00:21:53.518 | 1.00th=[ 3785], 5.00th=[ 6063], 10.00th=[ 6652], 20.00th=[ 7898], 00:21:53.518 | 30.00th=[ 8455], 40.00th=[11338], 50.00th=[12780], 60.00th=[15401], 00:21:53.518 | 70.00th=[17171], 80.00th=[19006], 90.00th=[29492], 95.00th=[49021], 00:21:53.518 | 99.00th=[61080], 99.50th=[64226], 99.90th=[66847], 99.95th=[66847], 00:21:53.518 | 99.99th=[66847] 00:21:53.518 bw ( KiB/s): min=12232, max=20536, per=20.92%, avg=16384.00, stdev=5871.81, samples=2 00:21:53.518 iops : min= 3058, max= 5134, avg=4096.00, stdev=1467.95, samples=2 00:21:53.518 lat (msec) : 2=0.15%, 4=0.42%, 10=27.83%, 20=48.33%, 50=19.20% 00:21:53.518 lat (msec) : 100=4.07% 00:21:53.518 cpu : usr=2.87%, sys=4.59%, ctx=257, majf=0, minf=2 00:21:53.518 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:21:53.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:53.518 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:53.518 issued rwts: total=3837,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:53.518 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:53.518 job2: (groupid=0, jobs=1): err= 0: pid=71027: Tue Oct 1 22:20:48 2024 00:21:53.518 read: IOPS=3558, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:21:53.518 slat (nsec): min=973, max=13502k, avg=115834.86, stdev=742360.52 00:21:53.518 clat (usec): min=1958, max=80758, avg=12822.93, stdev=9274.97 00:21:53.518 lat (usec): min=4258, max=80768, avg=12938.77, stdev=9380.56 00:21:53.518 clat percentiles (usec): 00:21:53.518 | 1.00th=[ 6587], 5.00th=[ 7570], 10.00th=[ 8160], 20.00th=[ 8356], 00:21:53.518 | 30.00th=[ 8717], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[11076], 00:21:53.518 | 70.00th=[12780], 80.00th=[15008], 90.00th=[17695], 95.00th=[26346], 00:21:53.518 | 99.00th=[63701], 99.50th=[71828], 99.90th=[80217], 99.95th=[81265], 00:21:53.518 | 99.99th=[81265] 00:21:53.518 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:21:53.518 slat (nsec): min=1597, max=15210k, avg=151809.25, stdev=785514.83 00:21:53.518 clat (usec): min=465, max=80680, avg=22528.62, stdev=19156.50 00:21:53.518 lat (usec): min=474, max=80684, avg=22680.43, stdev=19269.93 00:21:53.518 clat percentiles (usec): 00:21:53.518 | 1.00th=[ 1942], 5.00th=[ 4752], 10.00th=[ 6390], 20.00th=[ 7832], 00:21:53.518 | 30.00th=[ 8356], 40.00th=[11207], 50.00th=[14353], 60.00th=[17957], 00:21:53.518 | 70.00th=[25297], 80.00th=[42206], 90.00th=[57934], 95.00th=[60031], 00:21:53.518 | 99.00th=[66323], 99.50th=[69731], 
99.90th=[73925], 99.95th=[73925], 00:21:53.518 | 99.99th=[80217] 00:21:53.518 bw ( KiB/s): min=11896, max=16776, per=18.31%, avg=14336.00, stdev=3450.68, samples=2 00:21:53.518 iops : min= 2974, max= 4194, avg=3584.00, stdev=862.67, samples=2 00:21:53.518 lat (usec) : 500=0.04%, 1000=0.04% 00:21:53.518 lat (msec) : 2=0.56%, 4=1.15%, 10=43.80%, 20=31.42%, 50=13.21% 00:21:53.518 lat (msec) : 100=9.78% 00:21:53.518 cpu : usr=2.59%, sys=4.78%, ctx=379, majf=0, minf=2 00:21:53.518 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:21:53.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:53.518 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:53.518 issued rwts: total=3576,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:53.518 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:53.518 job3: (groupid=0, jobs=1): err= 0: pid=71034: Tue Oct 1 22:20:48 2024 00:21:53.518 read: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec) 00:21:53.518 slat (nsec): min=919, max=19569k, avg=87021.38, stdev=583145.51 00:21:53.518 clat (usec): min=6345, max=55224, avg=11158.36, stdev=7873.50 00:21:53.518 lat (usec): min=6627, max=55230, avg=11245.38, stdev=7906.53 00:21:53.518 clat percentiles (usec): 00:21:53.518 | 1.00th=[ 7242], 5.00th=[ 7767], 10.00th=[ 8029], 20.00th=[ 8356], 00:21:53.518 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[ 9503], 00:21:53.518 | 70.00th=[ 9896], 80.00th=[10290], 90.00th=[11600], 95.00th=[26608], 00:21:53.518 | 99.00th=[52691], 99.50th=[55313], 99.90th=[55313], 99.95th=[55313], 00:21:53.518 | 99.99th=[55313] 00:21:53.518 write: IOPS=6515, BW=25.5MiB/s (26.7MB/s)(25.5MiB/1002msec); 0 zone resets 00:21:53.518 slat (nsec): min=1512, max=7400.5k, avg=67940.64, stdev=355686.27 00:21:53.518 clat (usec): min=614, max=21435, avg=8797.11, stdev=1844.15 00:21:53.518 lat (usec): min=2570, max=21444, avg=8865.05, stdev=1819.57 00:21:53.518 clat percentiles (usec): 00:21:53.518 | 1.00th=[ 5538], 5.00th=[ 7111], 10.00th=[ 7898], 20.00th=[ 8160], 00:21:53.518 | 30.00th=[ 8225], 40.00th=[ 8291], 50.00th=[ 8356], 60.00th=[ 8586], 00:21:53.518 | 70.00th=[ 8848], 80.00th=[ 9110], 90.00th=[10028], 95.00th=[10814], 00:21:53.518 | 99.00th=[18482], 99.50th=[20579], 99.90th=[21365], 99.95th=[21365], 00:21:53.518 | 99.99th=[21365] 00:21:53.518 bw ( KiB/s): min=20728, max=30480, per=32.69%, avg=25604.00, stdev=6895.71, samples=2 00:21:53.518 iops : min= 5182, max= 7620, avg=6401.00, stdev=1723.93, samples=2 00:21:53.518 lat (usec) : 750=0.01% 00:21:53.518 lat (msec) : 4=0.25%, 10=80.51%, 20=16.04%, 50=2.20%, 100=0.99% 00:21:53.518 cpu : usr=2.80%, sys=5.09%, ctx=572, majf=0, minf=1 00:21:53.518 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:21:53.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:53.518 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:53.518 issued rwts: total=6144,6529,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:53.518 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:53.518 00:21:53.518 Run status group 0 (all jobs): 00:21:53.518 READ: bw=73.6MiB/s (77.1MB/s), 13.9MiB/s-24.0MiB/s (14.6MB/s-25.1MB/s), io=77.0MiB (80.7MB), run=1002-1046msec 00:21:53.518 WRITE: bw=76.5MiB/s (80.2MB/s), 13.9MiB/s-25.5MiB/s (14.6MB/s-26.7MB/s), io=80.0MiB (83.9MB), run=1002-1046msec 00:21:53.518 00:21:53.518 Disk stats (read/write): 00:21:53.518 nvme0n1: ios=5171/5415, merge=0/0, ticks=38033/51657, 
in_queue=89690, util=88.68% 00:21:53.518 nvme0n2: ios=3212/3584, merge=0/0, ticks=28186/32180, in_queue=60366, util=91.23% 00:21:53.518 nvme0n3: ios=3131/3239, merge=0/0, ticks=30974/63508, in_queue=94482, util=96.84% 00:21:53.518 nvme0n4: ios=5110/5120, merge=0/0, ticks=14952/10820, in_queue=25772, util=92.96% 00:21:53.518 22:20:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:21:53.518 22:20:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=71107 00:21:53.518 22:20:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:21:53.518 22:20:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:21:53.518 [global] 00:21:53.518 thread=1 00:21:53.518 invalidate=1 00:21:53.518 rw=read 00:21:53.518 time_based=1 00:21:53.518 runtime=10 00:21:53.518 ioengine=libaio 00:21:53.518 direct=1 00:21:53.518 bs=4096 00:21:53.518 iodepth=1 00:21:53.518 norandommap=1 00:21:53.518 numjobs=1 00:21:53.518 00:21:53.518 [job0] 00:21:53.518 filename=/dev/nvme0n1 00:21:53.518 [job1] 00:21:53.518 filename=/dev/nvme0n2 00:21:53.518 [job2] 00:21:53.518 filename=/dev/nvme0n3 00:21:53.518 [job3] 00:21:53.518 filename=/dev/nvme0n4 00:21:53.518 Could not set queue depth (nvme0n1) 00:21:53.518 Could not set queue depth (nvme0n2) 00:21:53.518 Could not set queue depth (nvme0n3) 00:21:53.518 Could not set queue depth (nvme0n4) 00:21:53.793 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:53.793 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:53.793 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:53.793 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:53.793 fio-3.35 00:21:53.793 Starting 4 threads 00:21:56.337 22:20:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:21:56.598 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=4517888, buflen=4096 00:21:56.598 fio: pid=71515, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:21:56.598 22:20:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:21:56.858 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=13537280, buflen=4096 00:21:56.858 fio: pid=71509, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:21:56.858 22:20:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:56.858 22:20:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:21:56.858 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=14274560, buflen=4096 00:21:56.858 fio: pid=71467, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:21:56.858 22:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:56.858 22:20:52 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:21:57.119 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=1638400, buflen=4096 00:21:57.119 fio: pid=71487, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:21:57.119 22:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:57.119 22:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:21:57.119 00:21:57.119 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=71467: Tue Oct 1 22:20:52 2024 00:21:57.119 read: IOPS=1183, BW=4732KiB/s (4845kB/s)(13.6MiB/2946msec) 00:21:57.119 slat (usec): min=6, max=24462, avg=34.09, stdev=440.90 00:21:57.119 clat (usec): min=233, max=2590, avg=802.06, stdev=174.12 00:21:57.119 lat (usec): min=240, max=25465, avg=836.16, stdev=478.24 00:21:57.119 clat percentiles (usec): 00:21:57.119 | 1.00th=[ 429], 5.00th=[ 523], 10.00th=[ 570], 20.00th=[ 635], 00:21:57.119 | 30.00th=[ 701], 40.00th=[ 758], 50.00th=[ 799], 60.00th=[ 848], 00:21:57.119 | 70.00th=[ 938], 80.00th=[ 979], 90.00th=[ 1012], 95.00th=[ 1037], 00:21:57.119 | 99.00th=[ 1090], 99.50th=[ 1139], 99.90th=[ 1369], 99.95th=[ 1598], 00:21:57.119 | 99.99th=[ 2606] 00:21:57.119 bw ( KiB/s): min= 4000, max= 5608, per=46.67%, avg=4937.60, stdev=783.87, samples=5 00:21:57.119 iops : min= 1000, max= 1402, avg=1234.40, stdev=195.97, samples=5 00:21:57.119 lat (usec) : 250=0.03%, 500=3.44%, 750=35.66%, 1000=47.10% 00:21:57.119 lat (msec) : 2=13.71%, 4=0.03% 00:21:57.119 cpu : usr=1.12%, sys=3.53%, ctx=3488, majf=0, minf=2 00:21:57.119 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:57.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:57.119 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:57.119 issued rwts: total=3486,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:57.119 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:57.119 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=71487: Tue Oct 1 22:20:52 2024 00:21:57.119 read: IOPS=127, BW=510KiB/s (522kB/s)(1600KiB/3136msec) 00:21:57.119 slat (usec): min=6, max=8574, avg=48.56, stdev=426.86 00:21:57.119 clat (usec): min=733, max=43036, avg=7730.29, stdev=15170.60 00:21:57.119 lat (usec): min=760, max=50007, avg=7778.91, stdev=15224.36 00:21:57.120 clat percentiles (usec): 00:21:57.120 | 1.00th=[ 873], 5.00th=[ 938], 10.00th=[ 963], 20.00th=[ 1012], 00:21:57.120 | 30.00th=[ 1045], 40.00th=[ 1057], 50.00th=[ 1074], 60.00th=[ 1106], 00:21:57.120 | 70.00th=[ 1123], 80.00th=[ 1172], 90.00th=[42206], 95.00th=[42206], 00:21:57.120 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:21:57.120 | 99.99th=[43254] 00:21:57.120 bw ( KiB/s): min= 96, max= 1744, per=4.93%, avg=521.17, stdev=647.01, samples=6 00:21:57.120 iops : min= 24, max= 436, avg=130.17, stdev=161.79, samples=6 00:21:57.120 lat (usec) : 750=0.25%, 1000=14.96% 00:21:57.120 lat (msec) : 2=68.33%, 50=16.21% 00:21:57.120 cpu : usr=0.06%, sys=0.64%, ctx=404, majf=0, minf=2 00:21:57.120 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:21:57.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:57.120 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:57.120 issued rwts: total=401,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:57.120 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:57.120 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=71509: Tue Oct 1 22:20:52 2024 00:21:57.120 read: IOPS=1187, BW=4747KiB/s (4861kB/s)(12.9MiB/2785msec) 00:21:57.120 slat (nsec): min=6765, max=80834, avg=25931.19, stdev=4276.81 00:21:57.120 clat (usec): min=226, max=1271, avg=803.76, stdev=181.97 00:21:57.120 lat (usec): min=234, max=1296, avg=829.69, stdev=182.03 00:21:57.120 clat percentiles (usec): 00:21:57.120 | 1.00th=[ 322], 5.00th=[ 474], 10.00th=[ 545], 20.00th=[ 635], 00:21:57.120 | 30.00th=[ 717], 40.00th=[ 783], 50.00th=[ 840], 60.00th=[ 881], 00:21:57.120 | 70.00th=[ 930], 80.00th=[ 971], 90.00th=[ 1004], 95.00th=[ 1045], 00:21:57.120 | 99.00th=[ 1106], 99.50th=[ 1139], 99.90th=[ 1221], 99.95th=[ 1270], 00:21:57.120 | 99.99th=[ 1270] 00:21:57.120 bw ( KiB/s): min= 4504, max= 5768, per=45.48%, avg=4811.20, stdev=536.91, samples=5 00:21:57.120 iops : min= 1126, max= 1442, avg=1202.80, stdev=134.23, samples=5 00:21:57.120 lat (usec) : 250=0.03%, 500=6.68%, 750=27.80%, 1000=53.96% 00:21:57.120 lat (msec) : 2=11.49% 00:21:57.120 cpu : usr=1.29%, sys=3.59%, ctx=3307, majf=0, minf=1 00:21:57.120 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:57.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:57.120 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:57.120 issued rwts: total=3306,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:57.120 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:57.120 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=71515: Tue Oct 1 22:20:52 2024 00:21:57.120 read: IOPS=423, BW=1694KiB/s (1735kB/s)(4412KiB/2604msec) 00:21:57.120 slat (nsec): min=6600, max=47484, avg=26642.24, stdev=3065.79 00:21:57.120 clat (usec): min=476, max=43022, avg=2308.03, stdev=7096.87 00:21:57.120 lat (usec): min=484, max=43047, avg=2334.67, stdev=7096.62 00:21:57.120 clat percentiles (usec): 00:21:57.120 | 1.00th=[ 685], 5.00th=[ 848], 10.00th=[ 914], 20.00th=[ 971], 00:21:57.120 | 30.00th=[ 1012], 40.00th=[ 1037], 50.00th=[ 1057], 60.00th=[ 1090], 00:21:57.120 | 70.00th=[ 1106], 80.00th=[ 1139], 90.00th=[ 1172], 95.00th=[ 1221], 00:21:57.120 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[43254], 00:21:57.120 | 99.99th=[43254] 00:21:57.120 bw ( KiB/s): min= 96, max= 3760, per=16.65%, avg=1761.60, stdev=1844.28, samples=5 00:21:57.120 iops : min= 24, max= 940, avg=440.40, stdev=461.07, samples=5 00:21:57.120 lat (usec) : 500=0.09%, 750=1.90%, 1000=25.09% 00:21:57.120 lat (msec) : 2=69.75%, 50=3.08% 00:21:57.120 cpu : usr=0.73%, sys=1.46%, ctx=1104, majf=0, minf=1 00:21:57.120 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:57.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:57.120 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:57.120 issued rwts: total=1104,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:57.120 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:57.120 00:21:57.120 Run status group 0 (all jobs): 00:21:57.120 READ: 
bw=10.3MiB/s (10.8MB/s), 510KiB/s-4747KiB/s (522kB/s-4861kB/s), io=32.4MiB (34.0MB), run=2604-3136msec 00:21:57.120 00:21:57.120 Disk stats (read/write): 00:21:57.120 nvme0n1: ios=3384/0, merge=0/0, ticks=2673/0, in_queue=2673, util=93.72% 00:21:57.120 nvme0n2: ios=399/0, merge=0/0, ticks=3019/0, in_queue=3019, util=95.45% 00:21:57.120 nvme0n3: ios=3104/0, merge=0/0, ticks=2364/0, in_queue=2364, util=96.03% 00:21:57.120 nvme0n4: ios=1103/0, merge=0/0, ticks=2472/0, in_queue=2472, util=96.46% 00:21:57.380 22:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:57.380 22:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:21:57.381 22:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:57.381 22:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:21:57.641 22:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:57.641 22:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:21:57.901 22:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:57.901 22:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:21:58.162 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:21:58.162 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 71107 00:21:58.162 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:21:58.162 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:58.162 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:58.162 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:58.162 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:21:58.162 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:58.162 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:58.162 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:58.163 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:58.163 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:21:58.163 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:21:58.163 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:21:58.163 nvmf hotplug test: fio failed as 
expected 00:21:58.163 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:58.423 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:21:58.424 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:21:58.424 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:21:58.424 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:21:58.424 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:21:58.424 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:58.424 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:21:58.424 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:58.424 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:21:58.424 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:58.424 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:58.424 rmmod nvme_tcp 00:21:58.424 rmmod nvme_fabrics 00:21:58.424 rmmod nvme_keyring 00:21:58.424 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:58.424 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:21:58.424 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:21:58.424 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 67581 ']' 00:21:58.424 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 67581 00:21:58.424 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 67581 ']' 00:21:58.424 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 67581 00:21:58.424 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:21:58.424 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:58.424 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67581 00:21:58.424 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:58.424 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:58.424 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67581' 00:21:58.424 killing process with pid 67581 00:21:58.424 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 67581 00:21:58.424 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 67581 00:21:58.684 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:58.685 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:58.685 
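[Annotation] The err=95 "Operation not supported" failures in the fio output above are the point of this hotplug pass: the loop deletes each malloc bdev (bdev_malloc_delete Malloc1..Malloc6) while fio is still reading the corresponding NVMe namespace, so the reads start failing, fio exits non-zero (fio_status=4), and the harness counts that as success ("nvmf hotplug test: fio failed as expected"). A hypothetical command-line reconstruction of the kind of run summarized in those stats follows; the suite generates its own job file, so ioengine and runtime here are assumptions, while bs=4k, iodepth=1, and rw=read are inferred from the per-job stats (buflen=4096, "IO depths: 1=100.0%", read-only rwts):

    # Hedged sketch only -- not the suite's generated job file.
    # Global options before the first --name apply to all four jobs.
    fio --ioengine=libaio --direct=1 --bs=4k --iodepth=1 --rw=read \
        --name=job0 --filename=/dev/nvme0n1 \
        --name=job1 --filename=/dev/nvme0n2 \
        --name=job2 --filename=/dev/nvme0n3 \
        --name=job3 --filename=/dev/nvme0n4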
22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:58.685 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:21:58.685 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:21:58.685 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:58.685 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:21:58.685 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:58.685 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:58.685 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.685 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:58.685 22:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.232 22:20:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:01.232 00:22:01.232 real 0m29.134s 00:22:01.232 user 2m37.569s 00:22:01.232 sys 0m9.495s 00:22:01.232 22:20:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:01.232 22:20:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.232 ************************************ 00:22:01.232 END TEST nvmf_fio_target 00:22:01.232 ************************************ 00:22:01.232 22:20:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:22:01.232 22:20:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:01.232 22:20:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:01.232 22:20:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:22:01.232 ************************************ 00:22:01.232 START TEST nvmf_bdevio 00:22:01.232 ************************************ 00:22:01.232 22:20:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:22:01.232 * Looking for test storage... 
00:22:01.232 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:01.232 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:01.232 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:22:01.232 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:01.232 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:01.232 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:01.232 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:01.232 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:01.232 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:22:01.232 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:22:01.232 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:22:01.232 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:22:01.232 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:22:01.232 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:22:01.232 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:22:01.232 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:01.232 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:22:01.232 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:22:01.232 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:01.232 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:01.232 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:22:01.232 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:22:01.232 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:01.232 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:22:01.232 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:22:01.232 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:22:01.232 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:22:01.232 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:01.232 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:22:01.232 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:22:01.232 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:01.232 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:01.232 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:22:01.232 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:01.232 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:01.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.232 --rc genhtml_branch_coverage=1 00:22:01.232 --rc genhtml_function_coverage=1 00:22:01.232 --rc genhtml_legend=1 00:22:01.232 --rc geninfo_all_blocks=1 00:22:01.232 --rc geninfo_unexecuted_blocks=1 00:22:01.232 00:22:01.232 ' 00:22:01.232 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:01.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.232 --rc genhtml_branch_coverage=1 00:22:01.232 --rc genhtml_function_coverage=1 00:22:01.232 --rc genhtml_legend=1 00:22:01.232 --rc geninfo_all_blocks=1 00:22:01.232 --rc geninfo_unexecuted_blocks=1 00:22:01.232 00:22:01.232 ' 00:22:01.232 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:01.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.232 --rc genhtml_branch_coverage=1 00:22:01.232 --rc genhtml_function_coverage=1 00:22:01.232 --rc genhtml_legend=1 00:22:01.232 --rc geninfo_all_blocks=1 00:22:01.232 --rc geninfo_unexecuted_blocks=1 00:22:01.232 00:22:01.232 ' 00:22:01.232 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:01.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.232 --rc genhtml_branch_coverage=1 00:22:01.232 --rc genhtml_function_coverage=1 00:22:01.232 --rc genhtml_legend=1 00:22:01.232 --rc geninfo_all_blocks=1 00:22:01.232 --rc geninfo_unexecuted_blocks=1 00:22:01.232 00:22:01.233 ' 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:01.233 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:22:01.233 22:20:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:09.379 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:09.379 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:09.379 22:21:03 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:09.379 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:09.379 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:09.380 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:09.380 
22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:09.380 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:09.380 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.694 ms 00:22:09.380 00:22:09.380 --- 10.0.0.2 ping statistics --- 00:22:09.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.380 rtt min/avg/max/mdev = 0.694/0.694/0.694/0.000 ms 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:09.380 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:09.380 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:22:09.380 00:22:09.380 --- 10.0.0.1 ping statistics --- 00:22:09.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.380 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=76769 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 76769 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 76769 ']' 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:09.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:09.380 22:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:09.380 [2024-10-01 22:21:03.573968] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
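[Annotation] The two pings close out nvmftestinit: the first crosses from the initiator port in the root namespace to the target port, the second goes back the other way from inside the namespace. Condensed from the trace above into a standalone script (same cvl_* interface names and 10.0.0.x addresses as this rig; requires root), the topology is: hide the target-side port in its own network namespace so that initiator traffic must actually traverse the link, then open TCP/4420 on the initiator interface:

    # Condensed from the nvmftestinit trace above; not the helper's exact source.
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into its own netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator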
00:22:09.380 [2024-10-01 22:21:03.574050] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:09.380 [2024-10-01 22:21:03.664427] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:09.380 [2024-10-01 22:21:03.756659] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:09.380 [2024-10-01 22:21:03.756722] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:09.380 [2024-10-01 22:21:03.756731] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:09.380 [2024-10-01 22:21:03.756738] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:09.380 [2024-10-01 22:21:03.756745] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:09.380 [2024-10-01 22:21:03.756923] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:22:09.380 [2024-10-01 22:21:03.757082] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:22:09.380 [2024-10-01 22:21:03.757240] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:22:09.380 [2024-10-01 22:21:03.757241] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:22:09.380 22:21:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:09.380 22:21:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:22:09.380 22:21:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:09.380 22:21:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:09.380 22:21:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:09.380 22:21:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:09.380 22:21:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:09.380 22:21:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.380 22:21:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:09.380 [2024-10-01 22:21:04.444115] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:09.380 22:21:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.380 22:21:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:09.380 22:21:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.380 22:21:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:09.380 Malloc0 00:22:09.380 22:21:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.380 22:21:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:09.380 22:21:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.380 22:21:04 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:09.380 22:21:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.380 22:21:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:09.380 22:21:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.380 22:21:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:09.380 22:21:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.380 22:21:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:09.380 22:21:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.380 22:21:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:09.380 [2024-10-01 22:21:04.509353] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:09.380 22:21:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.380 22:21:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:22:09.380 22:21:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:09.380 22:21:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:22:09.380 22:21:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:22:09.381 22:21:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:09.381 22:21:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:09.381 { 00:22:09.381 "params": { 00:22:09.381 "name": "Nvme$subsystem", 00:22:09.381 "trtype": "$TEST_TRANSPORT", 00:22:09.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.381 "adrfam": "ipv4", 00:22:09.381 "trsvcid": "$NVMF_PORT", 00:22:09.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.381 "hdgst": ${hdgst:-false}, 00:22:09.381 "ddgst": ${ddgst:-false} 00:22:09.381 }, 00:22:09.381 "method": "bdev_nvme_attach_controller" 00:22:09.381 } 00:22:09.381 EOF 00:22:09.381 )") 00:22:09.381 22:21:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:22:09.381 22:21:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:22:09.381 22:21:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:22:09.381 22:21:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:22:09.381 "params": { 00:22:09.381 "name": "Nvme1", 00:22:09.381 "trtype": "tcp", 00:22:09.381 "traddr": "10.0.0.2", 00:22:09.381 "adrfam": "ipv4", 00:22:09.381 "trsvcid": "4420", 00:22:09.381 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.381 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:09.381 "hdgst": false, 00:22:09.381 "ddgst": false 00:22:09.381 }, 00:22:09.381 "method": "bdev_nvme_attach_controller" 00:22:09.381 }' 00:22:09.381 [2024-10-01 22:21:04.566794] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
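[Annotation] The JSON printed by gen_nvmf_target_json above is what bdevio consumes on /dev/fd/62: a bdev_nvme_attach_controller entry that makes the bdevio process connect to the subsystem as an NVMe/TCP initiator before the tests start. Outside the harness, roughly the same attach can be expressed as a live RPC against an application that is already running; this is an equivalent formulation for reference, not what the script itself does, and the flag spellings below are the usual scripts/rpc.py ones rather than anything taken from this trace:

    # Rough live-RPC equivalent of the attach-controller config above (hedged).
    scripts/rpc.py bdev_nvme_attach_controller \
        -b Nvme1 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1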
00:22:09.381 [2024-10-01 22:21:04.566863] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77032 ] 00:22:09.641 [2024-10-01 22:21:04.634906] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:09.641 [2024-10-01 22:21:04.711316] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:09.641 [2024-10-01 22:21:04.711437] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:09.641 [2024-10-01 22:21:04.711441] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:09.901 I/O targets: 00:22:09.901 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:09.901 00:22:09.901 00:22:09.901 CUnit - A unit testing framework for C - Version 2.1-3 00:22:09.901 http://cunit.sourceforge.net/ 00:22:09.901 00:22:09.901 00:22:09.901 Suite: bdevio tests on: Nvme1n1 00:22:09.901 Test: blockdev write read block ...passed 00:22:09.901 Test: blockdev write zeroes read block ...passed 00:22:09.901 Test: blockdev write zeroes read no split ...passed 00:22:09.901 Test: blockdev write zeroes read split ...passed 00:22:09.901 Test: blockdev write zeroes read split partial ...passed 00:22:09.901 Test: blockdev reset ...[2024-10-01 22:21:05.076903] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:09.901 [2024-10-01 22:21:05.076970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c8270 (9): Bad file descriptor 00:22:09.901 [2024-10-01 22:21:05.129157] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:09.901 passed 00:22:10.162 Test: blockdev write read 8 blocks ...passed 00:22:10.162 Test: blockdev write read size > 128k ...passed 00:22:10.162 Test: blockdev write read invalid size ...passed 00:22:10.162 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:10.162 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:10.162 Test: blockdev write read max offset ...passed 00:22:10.162 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:10.162 Test: blockdev writev readv 8 blocks ...passed 00:22:10.162 Test: blockdev writev readv 30 x 1block ...passed 00:22:10.162 Test: blockdev writev readv block ...passed 00:22:10.162 Test: blockdev writev readv size > 128k ...passed 00:22:10.162 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:10.162 Test: blockdev comparev and writev ...[2024-10-01 22:21:05.350629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:10.162 [2024-10-01 22:21:05.350657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:10.162 [2024-10-01 22:21:05.350668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:10.162 [2024-10-01 22:21:05.350674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.162 [2024-10-01 22:21:05.351043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:10.162 [2024-10-01 22:21:05.351052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:10.162 [2024-10-01 22:21:05.351062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:10.162 [2024-10-01 22:21:05.351068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:10.162 [2024-10-01 22:21:05.351458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:10.162 [2024-10-01 22:21:05.351468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:10.162 [2024-10-01 22:21:05.351478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:10.162 [2024-10-01 22:21:05.351483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:10.162 [2024-10-01 22:21:05.351798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:10.162 [2024-10-01 22:21:05.351808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:10.162 [2024-10-01 22:21:05.351818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:10.162 [2024-10-01 22:21:05.351823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:10.162 passed 00:22:10.423 Test: blockdev nvme passthru rw ...passed 00:22:10.423 Test: blockdev nvme passthru vendor specific ...[2024-10-01 22:21:05.436080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:10.423 [2024-10-01 22:21:05.436093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:10.423 [2024-10-01 22:21:05.436293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:10.423 [2024-10-01 22:21:05.436301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:10.423 [2024-10-01 22:21:05.436513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:10.423 [2024-10-01 22:21:05.436521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:10.423 [2024-10-01 22:21:05.436739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:10.423 [2024-10-01 22:21:05.436747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:10.423 passed 00:22:10.423 Test: blockdev nvme admin passthru ...passed 00:22:10.423 Test: blockdev copy ...passed 00:22:10.423 00:22:10.423 Run Summary: Type Total Ran Passed Failed Inactive 00:22:10.423 suites 1 1 n/a 0 0 00:22:10.423 tests 23 23 23 0 0 00:22:10.423 asserts 152 152 152 0 n/a 00:22:10.423 00:22:10.423 Elapsed time = 1.107 seconds 00:22:10.423 22:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:10.423 22:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.423 22:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:10.423 22:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.423 22:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:10.423 22:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:22:10.423 22:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:10.423 22:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:22:10.423 22:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:10.423 22:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:22:10.684 22:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:10.684 22:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:10.684 rmmod nvme_tcp 00:22:10.684 rmmod nvme_fabrics 00:22:10.684 rmmod nvme_keyring 00:22:10.684 22:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:10.684 22:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:22:10.684 22:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
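[Annotation] For reference, the complete RPC lifecycle this bdevio pass drove on the target, collected from the rpc_cmd calls traced above (each resolves to scripts/rpc.py against the running target's RPC socket; arguments verbatim from the trace):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # ... bdevio runs its 23 tests against Nvme1n1 ...
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # teardown, as above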
00:22:10.684 22:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 76769 ']' 00:22:10.684 22:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 76769 00:22:10.684 22:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 76769 ']' 00:22:10.684 22:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 76769 00:22:10.684 22:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:22:10.684 22:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:10.684 22:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76769 00:22:10.684 22:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:22:10.684 22:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:22:10.684 22:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76769' 00:22:10.684 killing process with pid 76769 00:22:10.684 22:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 76769 00:22:10.684 22:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 76769 00:22:10.945 22:21:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:10.945 22:21:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:10.945 22:21:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:10.945 22:21:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:22:10.945 22:21:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:22:10.945 22:21:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:10.945 22:21:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:22:10.945 22:21:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:10.945 22:21:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:10.945 22:21:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:10.945 22:21:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:10.945 22:21:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:12.857 22:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:12.857 00:22:12.857 real 0m12.144s 00:22:12.857 user 0m13.191s 00:22:12.857 sys 0m6.250s 00:22:12.857 22:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:12.857 22:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:12.857 ************************************ 00:22:12.857 END TEST nvmf_bdevio 00:22:12.857 ************************************ 00:22:13.118 22:21:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:22:13.118 00:22:13.118 real 5m2.717s 00:22:13.118 user 11m51.784s 00:22:13.118 sys 1m48.240s 00:22:13.118 22:21:08 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:13.118 22:21:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:22:13.118 ************************************ 00:22:13.118 END TEST nvmf_target_core 00:22:13.118 ************************************ 00:22:13.118 22:21:08 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:22:13.118 22:21:08 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:13.118 22:21:08 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:13.118 22:21:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:13.118 ************************************ 00:22:13.118 START TEST nvmf_target_extra 00:22:13.118 ************************************ 00:22:13.118 22:21:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:22:13.118 * Looking for test storage... 00:22:13.118 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:22:13.118 22:21:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:13.118 22:21:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:22:13.118 22:21:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:13.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.381 --rc genhtml_branch_coverage=1 00:22:13.381 --rc genhtml_function_coverage=1 00:22:13.381 --rc genhtml_legend=1 00:22:13.381 --rc geninfo_all_blocks=1 00:22:13.381 --rc geninfo_unexecuted_blocks=1 00:22:13.381 00:22:13.381 ' 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:13.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.381 --rc genhtml_branch_coverage=1 00:22:13.381 --rc genhtml_function_coverage=1 00:22:13.381 --rc genhtml_legend=1 00:22:13.381 --rc geninfo_all_blocks=1 00:22:13.381 --rc geninfo_unexecuted_blocks=1 00:22:13.381 00:22:13.381 ' 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:13.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.381 --rc genhtml_branch_coverage=1 00:22:13.381 --rc genhtml_function_coverage=1 00:22:13.381 --rc genhtml_legend=1 00:22:13.381 --rc geninfo_all_blocks=1 00:22:13.381 --rc geninfo_unexecuted_blocks=1 00:22:13.381 00:22:13.381 ' 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:13.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.381 --rc genhtml_branch_coverage=1 00:22:13.381 --rc genhtml_function_coverage=1 00:22:13.381 --rc genhtml_legend=1 00:22:13.381 --rc geninfo_all_blocks=1 00:22:13.381 --rc geninfo_unexecuted_blocks=1 00:22:13.381 00:22:13.381 ' 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
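The xtrace above steps through the lcov version gate (scripts/common.sh: lt 1.15 2 via cmp_versions). A minimal re-implementation sketch of that pattern, reconstructed from the trace for illustration and assuming purely numeric version components (the real script also validates each component with a regex); this is not the exact SPDK source:

cmp_versions() {
	local -a ver1 ver2
	local op=$2 v comp1 comp2

	# Split on '.', '-' and ':' exactly as the traced IFS=.-: read does.
	IFS=.-: read -ra ver1 <<< "$1"
	IFS=.-: read -ra ver2 <<< "$3"

	# Walk the longer of the two component lists; missing parts count as 0.
	for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
		comp1=${ver1[v]:-0} comp2=${ver2[v]:-0}
		((comp1 == comp2)) && continue
		case $op in
			'<') ((comp1 < comp2)) ;;
			'>') ((comp1 > comp2)) ;;
		esac
		return
	done
	return 1 # equal versions: strict '<' and '>' are both false
}

lt() { cmp_versions "$1" '<' "$2"; }

# The branch taken in the run above: lcov 1.15 is older than 2, so the
# LCOV_OPTS coverage flags get exported.
lt 1.15 2 && echo "lcov 1.15 < 2"
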
00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.381 22:21:08 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:22:13.382 22:21:08 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.382 22:21:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:22:13.382 22:21:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:13.382 22:21:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:13.382 22:21:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:13.382 22:21:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:13.382 22:21:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:13.382 22:21:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:13.382 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:13.382 22:21:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:13.382 22:21:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:13.382 22:21:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:13.382 22:21:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:22:13.382 22:21:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:22:13.382 22:21:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:22:13.382 22:21:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:22:13.382 22:21:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:13.382 22:21:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:13.382 22:21:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:13.382 ************************************ 00:22:13.382 START TEST nvmf_example 00:22:13.382 ************************************ 00:22:13.382 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:22:13.382 * Looking for test storage... 
00:22:13.382 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:13.382 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:13.382 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lcov --version 00:22:13.382 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:13.643 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:13.643 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:13.643 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:13.643 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:13.643 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:22:13.643 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:13.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.644 --rc genhtml_branch_coverage=1 00:22:13.644 --rc genhtml_function_coverage=1 00:22:13.644 --rc genhtml_legend=1 00:22:13.644 --rc geninfo_all_blocks=1 00:22:13.644 --rc geninfo_unexecuted_blocks=1 00:22:13.644 00:22:13.644 ' 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:13.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.644 --rc genhtml_branch_coverage=1 00:22:13.644 --rc genhtml_function_coverage=1 00:22:13.644 --rc genhtml_legend=1 00:22:13.644 --rc geninfo_all_blocks=1 00:22:13.644 --rc geninfo_unexecuted_blocks=1 00:22:13.644 00:22:13.644 ' 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:13.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.644 --rc genhtml_branch_coverage=1 00:22:13.644 --rc genhtml_function_coverage=1 00:22:13.644 --rc genhtml_legend=1 00:22:13.644 --rc geninfo_all_blocks=1 00:22:13.644 --rc geninfo_unexecuted_blocks=1 00:22:13.644 00:22:13.644 ' 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:13.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.644 --rc genhtml_branch_coverage=1 00:22:13.644 --rc genhtml_function_coverage=1 00:22:13.644 --rc genhtml_legend=1 00:22:13.644 --rc geninfo_all_blocks=1 00:22:13.644 --rc geninfo_unexecuted_blocks=1 00:22:13.644 00:22:13.644 ' 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:22:13.644 22:21:08 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:13.644 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:22:13.644 22:21:08 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:22:13.644 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:22:13.645 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:13.645 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:13.645 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:13.645 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:13.645 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:13.645 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:13.645 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:13.645 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:13.645 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:13.645 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:13.645 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:22:13.645 22:21:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:22:21.793 22:21:15 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:21.793 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:21.793 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:21.793 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:21.793 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:21.793 22:21:15 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # is_hw=yes 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:21.793 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:21.794 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:21.794 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:21.794 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:21.794 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:21.794 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:21.794 22:21:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:21.794 22:21:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:21.794 22:21:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:21.794 22:21:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:21.794 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:21.794 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.581 ms 00:22:21.794 00:22:21.794 --- 10.0.0.2 ping statistics --- 00:22:21.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.794 rtt min/avg/max/mdev = 0.581/0.581/0.581/0.000 ms 00:22:21.794 22:21:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:21.794 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:21.794 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:22:21.794 00:22:21.794 --- 10.0.0.1 ping statistics --- 00:22:21.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.794 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:22:21.794 22:21:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:21.794 22:21:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # return 0 00:22:21.794 22:21:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:21.794 22:21:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:21.794 22:21:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:21.794 22:21:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:21.794 22:21:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:21.794 22:21:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:21.794 22:21:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:21.794 22:21:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:22:21.794 22:21:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:22:21.794 22:21:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:21.794 22:21:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:22:21.794 22:21:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:22:21.794 22:21:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:22:21.794 22:21:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=81987 00:22:21.794 22:21:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:21.794 22:21:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:22:21.794 22:21:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 81987 00:22:21.794 22:21:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 81987 ']' 00:22:21.794 22:21:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:21.794 22:21:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:21.794 22:21:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:21.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:21.794 22:21:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:21.794 22:21:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:22:21.794 22:21:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:21.794 22:21:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:22:21.794 22:21:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:22:21.794 22:21:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:21.794 22:21:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:22:21.794 22:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:21.794 22:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.794 22:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:22:21.794 22:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.794 22:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:22:21.794 22:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.794 22:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:22:22.055 22:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.055 22:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:22:22.055 22:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:22.055 22:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.055 22:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:22:22.055 22:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.055 22:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:22:22.055 22:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:22.055 22:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.055 22:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:22:22.055 22:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.055 22:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:22.055 22:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.055 22:21:17 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:22:22.055 22:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.055 22:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:22.055 22:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:34.278 Initializing NVMe Controllers 00:22:34.278 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:34.278 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:34.278 Initialization complete. Launching workers. 00:22:34.278 ======================================================== 00:22:34.278 Latency(us) 00:22:34.278 Device Information : IOPS MiB/s Average min max 00:22:34.278 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18712.02 73.09 3419.96 703.02 16568.25 00:22:34.278 ======================================================== 00:22:34.278 Total : 18712.02 73.09 3419.96 703.02 16568.25 00:22:34.278 00:22:34.278 22:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:22:34.278 22:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:22:34.278 22:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:34.278 22:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:22:34.278 22:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:34.278 22:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:22:34.278 22:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:34.278 22:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:34.278 rmmod nvme_tcp 00:22:34.278 rmmod nvme_fabrics 00:22:34.278 rmmod nvme_keyring 00:22:34.278 22:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:34.278 22:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:22:34.278 22:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:22:34.278 22:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@515 -- # '[' -n 81987 ']' 00:22:34.278 22:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # killprocess 81987 00:22:34.278 22:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 81987 ']' 00:22:34.278 22:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 81987 00:22:34.278 22:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:22:34.278 22:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:34.278 22:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81987 00:22:34.279 22:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # 
process_name=nvmf 00:22:34.279 22:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:22:34.279 22:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81987' 00:22:34.279 killing process with pid 81987 00:22:34.279 22:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 81987 00:22:34.279 22:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 81987 00:22:34.279 nvmf threads initialize successfully 00:22:34.279 bdev subsystem init successfully 00:22:34.279 created a nvmf target service 00:22:34.279 create targets's poll groups done 00:22:34.279 all subsystems of target started 00:22:34.279 nvmf target is running 00:22:34.279 all subsystems of target stopped 00:22:34.279 destroy targets's poll groups done 00:22:34.279 destroyed the nvmf target service 00:22:34.279 bdev subsystem finish successfully 00:22:34.279 nvmf threads destroy successfully 00:22:34.279 22:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:34.279 22:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:34.279 22:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:34.279 22:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:22:34.279 22:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-restore 00:22:34.279 22:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-save 00:22:34.279 22:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:34.279 22:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:34.279 22:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:34.279 22:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.279 22:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:34.279 22:21:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.847 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:34.847 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:22:34.847 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:34.847 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:22:34.847 00:22:34.847 real 0m21.371s 00:22:34.847 user 0m47.099s 00:22:34.847 sys 0m6.774s 00:22:34.847 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:34.847 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:22:34.847 ************************************ 00:22:34.847 END TEST nvmf_example 00:22:34.847 ************************************ 00:22:34.847 22:21:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:22:34.847 22:21:29 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:34.847 22:21:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:34.847 22:21:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:34.847 ************************************ 00:22:34.847 START TEST nvmf_filesystem 00:22:34.847 ************************************ 00:22:34.847 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:22:34.847 * Looking for test storage... 00:22:34.847 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:34.847 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:34.847 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:22:34.847 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:35.111 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:35.111 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:35.111 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:35.111 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:35.111 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:22:35.111 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:22:35.111 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:22:35.111 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:22:35.111 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:22:35.111 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:22:35.111 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:22:35.111 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:35.111 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:22:35.111 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:22:35.111 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:35.111 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:35.111 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:22:35.111 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:22:35.111 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:35.111 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:22:35.111 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:22:35.111 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:22:35.111 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:22:35.111 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:35.111 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:22:35.111 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:22:35.111 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:35.111 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:35.111 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:22:35.111 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:35.111 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:35.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.112 --rc genhtml_branch_coverage=1 00:22:35.112 --rc genhtml_function_coverage=1 00:22:35.112 --rc genhtml_legend=1 00:22:35.112 --rc geninfo_all_blocks=1 00:22:35.112 --rc geninfo_unexecuted_blocks=1 00:22:35.112 00:22:35.112 ' 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:35.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.112 --rc genhtml_branch_coverage=1 00:22:35.112 --rc genhtml_function_coverage=1 00:22:35.112 --rc genhtml_legend=1 00:22:35.112 --rc geninfo_all_blocks=1 00:22:35.112 --rc geninfo_unexecuted_blocks=1 00:22:35.112 00:22:35.112 ' 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:35.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.112 --rc genhtml_branch_coverage=1 00:22:35.112 --rc genhtml_function_coverage=1 00:22:35.112 --rc genhtml_legend=1 00:22:35.112 --rc geninfo_all_blocks=1 00:22:35.112 --rc geninfo_unexecuted_blocks=1 00:22:35.112 00:22:35.112 ' 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:35.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.112 --rc genhtml_branch_coverage=1 00:22:35.112 --rc genhtml_function_coverage=1 00:22:35.112 --rc genhtml_legend=1 00:22:35.112 --rc geninfo_all_blocks=1 00:22:35.112 --rc geninfo_unexecuted_blocks=1 00:22:35.112 00:22:35.112 ' 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:22:35.112 22:21:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:22:35.112 22:21:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # 
CONFIG_RDMA=y 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_TESTS=y 00:22:35.112 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:22:35.113 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:22:35.113 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:22:35.113 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:22:35.113 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:22:35.113 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:22:35.113 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:22:35.113 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:22:35.113 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:22:35.113 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:22:35.113 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:22:35.113 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:22:35.113 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:22:35.113 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:22:35.113 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:22:35.113 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:22:35.113 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:22:35.113 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:22:35.113 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:22:35.113 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:22:35.113 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:22:35.113 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:22:35.113 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:22:35.113 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:22:35.113 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
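[annotation] Above, test/common/applications.sh resolves the repository root from its own location (dirname + readlink -f) and then defines each SPDK application as a one-element array so callers can append arguments. A hedged sketch of the pattern (the `%/test/common` trim is an illustration; the trace only shows the before/after values of `_root`):

    _root=$(readlink -f "$(dirname "${BASH_SOURCE[0]}")")  # .../spdk/test/common
    _root=${_root%/test/common}                            # repo root: .../spdk
    _app_dir=$_root/build/bin
    _test_app_dir=$_root/test/app
    _examples_dir=$_root/build/examples

    NVMF_APP=("$_app_dir/nvmf_tgt")
    SPDK_APP=("$_app_dir/spdk_tgt")
    echo "${NVMF_APP[@]}"   # .../spdk/build/bin/nvmf_tgt

Keeping the apps as arrays means an invocation like "${NVMF_APP[@]}" with extra arguments expands cleanly even if the path ever contains spaces.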
-- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:22:35.113 #define SPDK_CONFIG_H 00:22:35.113 #define SPDK_CONFIG_AIO_FSDEV 1 00:22:35.113 #define SPDK_CONFIG_APPS 1 00:22:35.113 #define SPDK_CONFIG_ARCH native 00:22:35.113 #undef SPDK_CONFIG_ASAN 00:22:35.113 #undef SPDK_CONFIG_AVAHI 00:22:35.113 #undef SPDK_CONFIG_CET 00:22:35.113 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:22:35.113 #define SPDK_CONFIG_COVERAGE 1 00:22:35.113 #define SPDK_CONFIG_CROSS_PREFIX 00:22:35.113 #undef SPDK_CONFIG_CRYPTO 00:22:35.113 #undef SPDK_CONFIG_CRYPTO_MLX5 00:22:35.113 #undef SPDK_CONFIG_CUSTOMOCF 00:22:35.113 #undef SPDK_CONFIG_DAOS 00:22:35.113 #define SPDK_CONFIG_DAOS_DIR 00:22:35.113 #define SPDK_CONFIG_DEBUG 1 00:22:35.113 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:22:35.113 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:22:35.113 #define SPDK_CONFIG_DPDK_INC_DIR 00:22:35.113 #define SPDK_CONFIG_DPDK_LIB_DIR 00:22:35.113 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:22:35.113 #undef SPDK_CONFIG_DPDK_UADK 00:22:35.113 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:22:35.113 #define SPDK_CONFIG_EXAMPLES 1 00:22:35.113 #undef SPDK_CONFIG_FC 00:22:35.113 #define SPDK_CONFIG_FC_PATH 00:22:35.113 #define SPDK_CONFIG_FIO_PLUGIN 1 00:22:35.113 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:22:35.113 #define SPDK_CONFIG_FSDEV 1 00:22:35.113 #undef SPDK_CONFIG_FUSE 00:22:35.113 #undef SPDK_CONFIG_FUZZER 00:22:35.113 #define SPDK_CONFIG_FUZZER_LIB 00:22:35.113 #undef SPDK_CONFIG_GOLANG 00:22:35.113 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:22:35.113 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:22:35.113 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:22:35.113 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:22:35.113 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:22:35.113 #undef SPDK_CONFIG_HAVE_LIBBSD 00:22:35.113 #undef SPDK_CONFIG_HAVE_LZ4 00:22:35.113 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:22:35.113 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:22:35.113 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:22:35.113 #define SPDK_CONFIG_IDXD 1 00:22:35.113 #define SPDK_CONFIG_IDXD_KERNEL 1 00:22:35.113 #undef SPDK_CONFIG_IPSEC_MB 00:22:35.113 #define SPDK_CONFIG_IPSEC_MB_DIR 00:22:35.113 #define SPDK_CONFIG_ISAL 1 00:22:35.113 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:22:35.113 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:22:35.113 #define SPDK_CONFIG_LIBDIR 00:22:35.113 #undef SPDK_CONFIG_LTO 00:22:35.113 #define SPDK_CONFIG_MAX_LCORES 128 00:22:35.113 #define SPDK_CONFIG_NVME_CUSE 1 00:22:35.113 #undef SPDK_CONFIG_OCF 00:22:35.113 #define SPDK_CONFIG_OCF_PATH 00:22:35.113 #define SPDK_CONFIG_OPENSSL_PATH 00:22:35.113 #undef SPDK_CONFIG_PGO_CAPTURE 00:22:35.113 #define SPDK_CONFIG_PGO_DIR 00:22:35.113 #undef SPDK_CONFIG_PGO_USE 00:22:35.113 #define SPDK_CONFIG_PREFIX /usr/local 00:22:35.113 #undef SPDK_CONFIG_RAID5F 00:22:35.113 #undef SPDK_CONFIG_RBD 00:22:35.113 #define SPDK_CONFIG_RDMA 1 00:22:35.113 #define SPDK_CONFIG_RDMA_PROV verbs 00:22:35.113 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:22:35.113 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:22:35.113 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:22:35.113 #define SPDK_CONFIG_SHARED 1 00:22:35.113 #undef SPDK_CONFIG_SMA 00:22:35.113 #define SPDK_CONFIG_TESTS 1 00:22:35.113 #undef SPDK_CONFIG_TSAN 00:22:35.113 #define SPDK_CONFIG_UBLK 1 00:22:35.113 #define SPDK_CONFIG_UBSAN 1 00:22:35.113 #undef SPDK_CONFIG_UNIT_TESTS 00:22:35.113 #undef SPDK_CONFIG_URING 00:22:35.113 #define 
SPDK_CONFIG_URING_PATH 00:22:35.113 #undef SPDK_CONFIG_URING_ZNS 00:22:35.113 #undef SPDK_CONFIG_USDT 00:22:35.113 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:22:35.113 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:22:35.113 #define SPDK_CONFIG_VFIO_USER 1 00:22:35.113 #define SPDK_CONFIG_VFIO_USER_DIR 00:22:35.113 #define SPDK_CONFIG_VHOST 1 00:22:35.113 #define SPDK_CONFIG_VIRTIO 1 00:22:35.113 #undef SPDK_CONFIG_VTUNE 00:22:35.113 #define SPDK_CONFIG_VTUNE_DIR 00:22:35.113 #define SPDK_CONFIG_WERROR 1 00:22:35.113 #define SPDK_CONFIG_WPDK_DIR 00:22:35.113 #undef SPDK_CONFIG_XNVME 00:22:35.113 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:22:35.113 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:22:35.113 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:35.113 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:22:35.113 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:35.113 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:35.113 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:35.113 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.113 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.113 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.113 22:21:30 
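[annotation] The long `[[ #ifndef SPDK_CONFIG_H ... == *\#\d\e\f\i\n\e\ ... ]]` test above is applications.sh glob-matching the entire contents of include/spdk/config.h; every character of the pattern is backslash-escaped in the xtrace output, which is why it reads as `\#\d\e\f\i\n\e`. Functionally it is just a substring check, e.g.:

    config=$(< include/spdk/config.h)          # whole file into one string
    if [[ $config == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        echo "debug build detected"            # gates the SPDK_AUTOTEST_DEBUG_APPS branch that follows
    fi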
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:22:35.113 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.113 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:22:35.113 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:22:35.113 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:22:35.113 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:22:35.113 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:22:35.113 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:22:35.113 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:22:35.113 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
pm/common@79 -- # [[ Linux == FreeBSD ]] 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
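[annotation] The pm/common section above builds the list of power/performance monitors for this run: an associative array marks which collectors would need sudo, the non-privileged ones are always on, and cpu-temp plus bmc-pm are appended only on bare-metal Linux. A sketch of that selection (the DMI product-name value is elided in the log itself, so it is elided here too):

    declare -A MONITOR_RESOURCES_SUDO=(
        [collect-bmc-pm]=1      # needs sudo
        [collect-cpu-load]=0
        [collect-cpu-temp]=0
        [collect-vmstat]=0
    )
    MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
    if [[ $(uname -s) == Linux && ! -e /.dockerenv ]]; then
        # the real script also rejects hosts whose product name is QEMU;
        # that value appears only as '...............................' above
        MONITOR_RESOURCES+=(collect-cpu-temp collect-bmc-pm)
    fi
    printf '%s\n' "${MONITOR_RESOURCES[@]}"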
common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- 
# export SPDK_TEST_IOAT 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:22:35.114 
22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:22:35.114 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:22:35.115 22:21:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
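[annotation] The long run of `: 0` / `export SPDK_TEST_*` pairs above (autotest_common.sh@58 onward) is the standard bash default-then-export idiom: `:` is a no-op whose `${VAR:=default}` argument performs the assignment only when the CI job has not already set the variable. For example, with this job's values:

    : "${SPDK_TEST_NVMF:=1}"              # assigns 1 only if unset or empty
    : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"
    : "${SPDK_TEST_NVMF_NICS:=e810}"
    export SPDK_TEST_NVMF SPDK_TEST_NVMF_TRANSPORT SPDK_TEST_NVMF_NICS

This is why the trace prints the bare value (`: 1`, `: tcp`, `: e810`) immediately before each export.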
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
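[annotation] Note how LD_LIBRARY_PATH and PYTHONPATH above contain the same directories four or five times over: each nested test re-sources the export scripts, which prepend unconditionally. Harmless, but if one wanted to deduplicate such a colon-separated list while preserving order, a small helper would do it (illustrative, not part of the harness):

    dedup_path() {
        # split on ':', keep the first occurrence of each entry, rejoin
        printf '%s' "$1" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//'
    }
    LD_LIBRARY_PATH=$(dedup_path "$LD_LIBRARY_PATH")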
00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
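[annotation] Above, the harness wires up the sanitizers: it recreates a leak-suppression file, appends the known libfuse3 leak, and exports the *SAN_OPTIONS variables so every instrumented binary the tests spawn inherits them. The values below are copied from the trace; only the ordering is condensed:

    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    echo "leak:libfuse3.so" >> "$asan_suppression_file"
    export LSAN_OPTIONS=suppressions=$asan_suppression_file
    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134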
00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:22:35.115 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j144 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 84776 ]] 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 84776 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:22:35.116 
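[annotation] The `[[ -z 84776 ]]` / `kill -0 84776` pair above is a liveness probe on a PID the harness recorded (84776 in this run) before provisioning test storage: signal 0 delivers nothing, but the kernel still performs the existence/permission check, so the command succeeds only while the process is running. In isolation:

    pid=84776                      # PID taken from the trace above
    if kill -0 "$pid" 2>/dev/null; then
        echo "process $pid is alive"
    else
        echo "process $pid has exited"
    fi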
22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.vPUUkJ 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.vPUUkJ/tests/target /tmp/spdk.vPUUkJ 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=785162240 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:22:35.116 22:21:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=4499267584 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=119326908416 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=129356505088 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10029596672 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64666886144 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678252544 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=11366400 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=25847934976 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=25871302656 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23367680 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=efivarfs 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=efivarfs 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=101376 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=507904 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=402432 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:22:35.116 22:21:30 
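[annotation] The read loop above is set_test_storage ingesting `df -T` output: each mount's filesystem, size, usage and free space land in associative arrays keyed by mount point, which the later "Looking for test storage..." step consults to find a directory with the requested 2 GiB free. A simplified sketch (the trace stores byte counts; how df is scaled to bytes is not visible in this excerpt, so the sketch converts the default 1K blocks explicitly):

    declare -A mounts fss sizes avails uses
    while read -r source fs size used avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        sizes["$mount"]=$((size * 1024))    # df -T reports 1K blocks by default
        uses["$mount"]=$((used * 1024))
        avails["$mount"]=$((avail * 1024))
    done < <(df -T | grep -v Filesystem)

    requested_size=$((2 * 1024 * 1024 * 1024))   # 2 GiB, as in the trace
    if (( ${avails[/]:-0} >= requested_size )); then
        echo "/ has enough free space for the test"
    fi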
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64677990400 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678252544 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=262144 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12935634944 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12935647232 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:22:35.116 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:22:35.117 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:22:35.117 * Looking for test storage... 00:22:35.117 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:22:35.117 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:22:35.117 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:35.117 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:22:35.117 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:22:35.117 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=119326908416 00:22:35.117 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:22:35.117 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:22:35.117 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:22:35.117 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:22:35.117 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:22:35.117 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=12244189184 00:22:35.117 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:22:35.117 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:35.117 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:35.117 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:35.117 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:35.117 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:22:35.117 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1668 -- # set -o errtrace 00:22:35.117 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:22:35.117 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:22:35.117 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1672 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:22:35.117 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1673 -- # true 00:22:35.117 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1675 -- # xtrace_fd 00:22:35.117 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:22:35.117 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:22:35.117 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:22:35.117 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:22:35.117 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:22:35.117 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:22:35.117 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:22:35.117 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:22:35.117 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:35.117 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:22:35.117 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:35.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.379 --rc genhtml_branch_coverage=1 00:22:35.379 --rc genhtml_function_coverage=1 00:22:35.379 --rc genhtml_legend=1 00:22:35.379 --rc geninfo_all_blocks=1 00:22:35.379 --rc geninfo_unexecuted_blocks=1 00:22:35.379 00:22:35.379 ' 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:35.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.379 --rc genhtml_branch_coverage=1 00:22:35.379 --rc genhtml_function_coverage=1 00:22:35.379 --rc genhtml_legend=1 00:22:35.379 --rc geninfo_all_blocks=1 00:22:35.379 --rc geninfo_unexecuted_blocks=1 00:22:35.379 00:22:35.379 ' 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:35.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.379 --rc genhtml_branch_coverage=1 00:22:35.379 --rc genhtml_function_coverage=1 00:22:35.379 --rc genhtml_legend=1 00:22:35.379 --rc geninfo_all_blocks=1 00:22:35.379 --rc geninfo_unexecuted_blocks=1 00:22:35.379 00:22:35.379 ' 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:35.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.379 --rc genhtml_branch_coverage=1 00:22:35.379 --rc genhtml_function_coverage=1 00:22:35.379 --rc genhtml_legend=1 00:22:35.379 --rc geninfo_all_blocks=1 00:22:35.379 --rc geninfo_unexecuted_blocks=1 00:22:35.379 00:22:35.379 ' 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
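The lcov probe above feeds scripts/common.sh's cmp_versions, which splits version strings on '.', '-' and ':' and compares them field by field, padding the shorter side with zeros. A standalone sketch of that comparison (helper name lt as in the trace; the lcov pipeline is copied from it):

lt() {    # "is $1 < $2", numerically per dotted field
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local i max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( i = 0; i < max; i++ )); do
        (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
        (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
    done
    return 1    # equal is not less-than
}
lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov is pre-2.x, keep legacy --rc options"

That is why lt 1.15 2 returns 0 above and the legacy LCOV_OPTS/LCOV values get exported.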
-- nvmf/common.sh@7 -- # uname -s 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:35.379 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.380 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.380 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.380 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:22:35.380 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.380 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:22:35.380 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:35.380 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:35.380 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:35.380 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:35.380 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:35.380 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:35.380 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:35.380 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:35.380 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:35.380 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:35.380 22:21:30 
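The "[: : integer expression expected" complaint above is benign for this run: an unset flag reaches a numeric test as an empty string (the trace shows '[' '' -eq 1 ']'). The flag's name is not visible in the trace; a generic illustration of the failure mode and its defensive form:

flag=''                      # hypothetical stand-in for the unset flag
[ "$flag" -eq 1 ]            # -> "[: : integer expression expected", exit status 2
[ "${flag:-0}" -eq 1 ]       # defaulting the value first keeps the test well-formed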
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:22:35.380 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:35.380 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:22:35.380 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:35.380 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:35.380 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:35.380 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:35.380 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:35.380 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.380 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:35.380 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.380 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:35.380 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:35.380 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:22:35.380 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:22:43.520 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:43.520 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:22:43.520 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:43.520 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:43.520 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:43.520 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:43.520 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:43.520 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:22:43.520 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:43.520 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:22:43.520 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:22:43.520 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:22:43.520 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:22:43.520 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:22:43.520 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:22:43.520 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:43.521 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:43.521 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:43.521 22:21:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:43.521 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:43.521 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # is_hw=yes 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:43.521 22:21:37 
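The scan above settles on the two Intel E810 ports (device id 0x159b, driver ice) and resolves each PCI address to its kernel interface through sysfs. Condensed, with the addresses and interface names from the trace:

pci_devs=(0000:4b:00.0 0000:4b:00.1)          # the two E810 (0x8086:0x159b) ports found
net_devs=()
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path, keep the names
    net_devs+=("${pci_net_devs[@]}")
done
echo "${net_devs[@]}"                          # -> cvl_0_0 cvl_0_1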
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:43.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:43.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.576 ms 00:22:43.521 00:22:43.521 --- 10.0.0.2 ping statistics --- 00:22:43.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.521 rtt min/avg/max/mdev = 0.576/0.576/0.576/0.000 ms 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:43.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:43.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:22:43.521 00:22:43.521 --- 10.0.0.1 ping statistics --- 00:22:43.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.521 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # return 0 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:43.521 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:43.522 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:43.522 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:43.522 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:22:43.522 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:43.522 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:43.522 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:22:43.522 ************************************ 00:22:43.522 START TEST nvmf_filesystem_no_in_capsule 00:22:43.522 ************************************ 00:22:43.522 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:22:43.522 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:22:43.522 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:22:43.522 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:43.522 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:43.522 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:43.522 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=88733 00:22:43.522 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 88733 00:22:43.522 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:43.522 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 88733 ']' 00:22:43.522 22:21:37 
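nvmf_tcp_init, traced above, moves one port into a private network namespace so the target (10.0.0.2 on cvl_0_0) and the initiator (10.0.0.1 on cvl_0_1) talk over a real wire while sharing one machine, then punches a firewall hole for port 4420 and verifies reachability both ways. The command sequence, collected from the trace:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator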
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:43.522 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:43.522 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:43.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:43.522 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:43.522 22:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:43.522 [2024-10-01 22:21:38.014879] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:22:43.522 [2024-10-01 22:21:38.014932] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:43.522 [2024-10-01 22:21:38.082369] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:43.522 [2024-10-01 22:21:38.150612] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:43.522 [2024-10-01 22:21:38.150652] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:43.522 [2024-10-01 22:21:38.150660] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:43.522 [2024-10-01 22:21:38.150667] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:43.522 [2024-10-01 22:21:38.150673] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
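nvmfappstart then launches the target inside that namespace and blocks until its RPC socket answers; the DPDK/EAL notices above are its startup banner. A sketch of the launch-and-wait step, where the polling loop stands in for the suite's waitforlisten helper:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# poll the UNIX-domain RPC socket until the app is up (waitforlisten adds a retry cap)
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" || exit 1    # bail out if the target died during startup
    sleep 0.5
done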
00:22:43.522 [2024-10-01 22:21:38.150860] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:43.522 [2024-10-01 22:21:38.150975] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:43.522 [2024-10-01 22:21:38.151131] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:43.522 [2024-10-01 22:21:38.151131] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:22:43.782 22:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:43.782 22:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:22:43.782 22:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:43.782 22:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:43.782 22:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:43.782 22:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:43.782 22:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:22:43.782 22:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:22:43.782 22:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.782 22:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:43.782 [2024-10-01 22:21:38.852899] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:43.782 22:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.782 22:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:22:43.782 22:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.782 22:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:43.782 Malloc1 00:22:43.782 22:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.782 22:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:22:43.782 22:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.782 22:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:43.782 22:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.782 22:21:38 
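With the target up, the suite configures it over RPC: a TCP transport with in-capsule data disabled for this pass, a 512 MiB ram-backed bdev (512-byte blocks, hence the 1048576 num_blocks in the JSON below), and a subsystem exposing it, with the namespace and listener calls following just below in the trace. The sequence, flags exactly as traced (rpc_cmd here is a thin sketch of the suite's wrapper):

rpc_cmd() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
rpc_cmd bdev_malloc_create 512 512 -b Malloc1                 # 512 MiB, 512-byte blocks
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420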
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:43.782 22:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.782 22:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:43.782 22:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.782 22:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:43.782 22:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.782 22:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:43.782 [2024-10-01 22:21:38.981132] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:43.782 22:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.782 22:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:22:43.782 22:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:22:43.783 22:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:43.783 22:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:22:43.783 22:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:22:43.783 22:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:22:43.783 22:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.783 22:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:43.783 22:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.783 22:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:43.783 { 00:22:43.783 "name": "Malloc1", 00:22:43.783 "aliases": [ 00:22:43.783 "4e88e173-47e1-4cbb-aa09-4de26e11f12b" 00:22:43.783 ], 00:22:43.783 "product_name": "Malloc disk", 00:22:43.783 "block_size": 512, 00:22:43.783 "num_blocks": 1048576, 00:22:43.783 "uuid": "4e88e173-47e1-4cbb-aa09-4de26e11f12b", 00:22:43.783 "assigned_rate_limits": { 00:22:43.783 "rw_ios_per_sec": 0, 00:22:43.783 "rw_mbytes_per_sec": 0, 00:22:43.783 "r_mbytes_per_sec": 0, 00:22:43.783 "w_mbytes_per_sec": 0 00:22:43.783 }, 00:22:43.783 "claimed": true, 00:22:43.783 "claim_type": "exclusive_write", 00:22:43.783 "zoned": false, 00:22:43.783 "supported_io_types": { 00:22:43.783 "read": 
true, 00:22:43.783 "write": true, 00:22:43.783 "unmap": true, 00:22:43.783 "flush": true, 00:22:43.783 "reset": true, 00:22:43.783 "nvme_admin": false, 00:22:43.783 "nvme_io": false, 00:22:43.783 "nvme_io_md": false, 00:22:43.783 "write_zeroes": true, 00:22:43.783 "zcopy": true, 00:22:43.783 "get_zone_info": false, 00:22:43.783 "zone_management": false, 00:22:43.783 "zone_append": false, 00:22:43.783 "compare": false, 00:22:43.783 "compare_and_write": false, 00:22:43.783 "abort": true, 00:22:43.783 "seek_hole": false, 00:22:43.783 "seek_data": false, 00:22:43.783 "copy": true, 00:22:43.783 "nvme_iov_md": false 00:22:43.783 }, 00:22:43.783 "memory_domains": [ 00:22:43.783 { 00:22:43.783 "dma_device_id": "system", 00:22:43.783 "dma_device_type": 1 00:22:43.783 }, 00:22:43.783 { 00:22:43.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:43.783 "dma_device_type": 2 00:22:43.783 } 00:22:43.783 ], 00:22:43.783 "driver_specific": {} 00:22:43.783 } 00:22:43.783 ]' 00:22:43.783 22:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:44.043 22:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:22:44.043 22:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:44.043 22:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:22:44.043 22:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:22:44.043 22:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:22:44.043 22:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:22:44.043 22:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:22:45.426 22:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:22:45.426 22:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:22:45.426 22:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:22:45.426 22:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:22:45.426 22:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:22:47.971 22:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:22:47.971 22:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:22:47.971 22:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:22:47.971 22:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:22:47.971 22:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:22:47.971 22:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:22:47.971 22:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:22:47.971 22:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:22:47.971 22:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:22:47.971 22:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:22:47.971 22:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:22:47.971 22:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:47.971 22:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:22:47.971 22:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:22:47.971 22:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:22:47.971 22:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:22:47.971 22:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:22:47.971 22:21:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:22:48.232 22:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:22:49.232 22:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:22:49.232 22:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:22:49.232 22:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:22:49.232 22:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:49.232 22:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:49.232 ************************************ 00:22:49.232 START TEST filesystem_ext4 00:22:49.232 ************************************ 00:22:49.232 22:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
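Host-side, the steps just traced connect the kernel initiator, wait for the namespace to surface, locate its block device by serial, and carve a single GPT partition before the per-filesystem tests begin. Condensed (the wait loop is a sketch of waitforserial, which also caps its retries):

nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 \
    --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
until (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) >= 1 )); do sleep 2; done
nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
mkdir -p /mnt/device
parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe && sleep 1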
00:22:49.232 22:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:22:49.232 22:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:22:49.232 22:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:22:49.232 22:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:22:49.232 22:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:22:49.232 22:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:22:49.232 22:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:22:49.233 22:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:22:49.233 22:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:22:49.233 22:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:22:49.233 mke2fs 1.47.0 (5-Feb-2023) 00:22:49.233 Discarding device blocks: 0/522240 done 00:22:49.233 Creating filesystem with 522240 1k blocks and 130560 inodes 00:22:49.233 Filesystem UUID: 145ff6c7-8524-4ba6-b651-92f1feeb1b48 00:22:49.233 Superblock backups stored on blocks: 00:22:49.233 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:22:49.233 00:22:49.233 Allocating group tables: 0/64 done 00:22:49.233 Writing inode tables: 0/64 done 00:22:51.176 Creating journal (8192 blocks): done 00:22:51.176 Writing superblocks and filesystem accounting information: 0/64 done 00:22:51.176 00:22:51.176 22:21:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:22:51.176 22:21:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:22:56.464 22:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:22:56.724 22:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:22:56.724 22:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:22:56.724 22:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:22:56.724 22:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:22:56.724 22:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:22:56.724 
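Each filesystem case then runs the same smoke test: make the filesystem, mount it, create and remove a file with syncs in between, unmount, and finally check that both the target process and the exported device survived. The body, reconstructed from the trace:

mkfs.ext4 -F /dev/nvme0n1p1            # the btrfs/xfs passes use mkfs.btrfs -f / mkfs.xfs -f
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 "$nvmfpid"                                  # target still alive?
lsblk -l -o NAME | grep -q -w nvme0n1               # device still visible?
lsblk -l -o NAME | grep -q -w nvme0n1p1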
22:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 88733 00:22:56.724 22:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:22:56.724 22:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:22:56.724 22:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:22:56.724 22:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:22:56.724 00:22:56.724 real 0m7.456s 00:22:56.724 user 0m0.027s 00:22:56.724 sys 0m0.078s 00:22:56.724 22:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:56.724 22:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:22:56.724 ************************************ 00:22:56.724 END TEST filesystem_ext4 00:22:56.724 ************************************ 00:22:56.724 22:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:22:56.724 22:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:22:56.724 22:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:56.724 22:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:56.724 ************************************ 00:22:56.724 START TEST filesystem_btrfs 00:22:56.724 ************************************ 00:22:56.724 22:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:22:56.724 22:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:22:56.724 22:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:22:56.724 22:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:22:56.724 22:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:22:56.724 22:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:22:56.724 22:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:22:56.725 22:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:22:56.725 22:21:51 
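make_filesystem itself, traced again here for btrfs, only picks the right force flag (ext4 wants -F, btrfs and xfs want -f) and retries if mkfs fails; a sketch (the retry cap is illustrative, the trace only shows i=0):

make_filesystem() {
    local fstype=$1 dev_name=$2 i=0 force
    [ "$fstype" = ext4 ] && force=-F || force=-f
    until mkfs."$fstype" "$force" "$dev_name"; do
        (( ++i > 3 )) && return 1      # illustrative cap; the suite retries longer
        sleep 1
    done
}
make_filesystem btrfs /dev/nvme0n1p1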
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:22:56.725 22:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:22:56.725 22:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:22:56.985 btrfs-progs v6.8.1 00:22:56.985 See https://btrfs.readthedocs.io for more information. 00:22:56.985 00:22:56.985 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:22:56.985 NOTE: several default settings have changed in version 5.15, please make sure 00:22:56.985 this does not affect your deployments: 00:22:56.985 - DUP for metadata (-m dup) 00:22:56.985 - enabled no-holes (-O no-holes) 00:22:56.985 - enabled free-space-tree (-R free-space-tree) 00:22:56.985 00:22:56.985 Label: (null) 00:22:56.985 UUID: 04e8f713-a97c-4bd8-9eea-9c0d976e8f83 00:22:56.985 Node size: 16384 00:22:56.985 Sector size: 4096 (CPU page size: 4096) 00:22:56.985 Filesystem size: 510.00MiB 00:22:56.985 Block group profiles: 00:22:56.985 Data: single 8.00MiB 00:22:56.985 Metadata: DUP 32.00MiB 00:22:56.985 System: DUP 8.00MiB 00:22:56.985 SSD detected: yes 00:22:56.985 Zoned device: no 00:22:56.985 Features: extref, skinny-metadata, no-holes, free-space-tree 00:22:56.985 Checksum: crc32c 00:22:56.985 Number of devices: 1 00:22:56.985 Devices: 00:22:56.985 ID SIZE PATH 00:22:56.985 1 510.00MiB /dev/nvme0n1p1 00:22:56.985 00:22:56.985 22:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:22:56.985 22:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:22:58.371 22:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:22:58.371 22:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:22:58.371 22:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:22:58.371 22:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:22:58.371 22:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:22:58.371 22:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:22:58.371 22:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 88733 00:22:58.371 22:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:22:58.371 22:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:22:58.371 22:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:22:58.371 
22:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:22:58.371 00:22:58.371 real 0m1.479s 00:22:58.371 user 0m0.036s 00:22:58.371 sys 0m0.150s 00:22:58.372 22:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:58.372 22:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:22:58.372 ************************************ 00:22:58.372 END TEST filesystem_btrfs 00:22:58.372 ************************************ 00:22:58.372 22:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:22:58.372 22:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:22:58.372 22:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:58.372 22:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:58.372 ************************************ 00:22:58.372 START TEST filesystem_xfs 00:22:58.372 ************************************ 00:22:58.372 22:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:22:58.372 22:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:22:58.372 22:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:22:58.372 22:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:22:58.372 22:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:22:58.372 22:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:22:58.372 22:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:22:58.372 22:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:22:58.372 22:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:22:58.372 22:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:22:58.372 22:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:22:58.372 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:22:58.372 = sectsz=512 attr=2, projid32bit=1 00:22:58.372 = crc=1 finobt=1, sparse=1, rmapbt=0 00:22:58.372 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:22:58.372 data 
= bsize=4096 blocks=130560, imaxpct=25 00:22:58.372 = sunit=0 swidth=0 blks 00:22:58.372 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:22:58.372 log =internal log bsize=4096 blocks=16384, version=2 00:22:58.372 = sectsz=512 sunit=0 blks, lazy-count=1 00:22:58.372 realtime =none extsz=4096 blocks=0, rtextents=0 00:22:59.756 Discarding blocks...Done. 00:22:59.756 22:21:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:22:59.756 22:21:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:23:03.057 22:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:23:03.057 22:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:23:03.057 22:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:23:03.057 22:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:23:03.057 22:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:23:03.057 22:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:23:03.057 22:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 88733 00:23:03.057 22:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:23:03.057 22:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:23:03.057 22:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:23:03.057 22:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:23:03.057 00:23:03.057 real 0m4.842s 00:23:03.057 user 0m0.033s 00:23:03.057 sys 0m0.107s 00:23:03.057 22:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:03.057 22:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:23:03.057 ************************************ 00:23:03.057 END TEST filesystem_xfs 00:23:03.057 ************************************ 00:23:03.318 22:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:23:03.318 22:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:23:03.318 22:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:03.578 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:03.579 22:21:58 
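All six filesystem cases in this section (ext4, btrfs and xfs, first without and then with in-capsule data) run the same nvmf_filesystem_create body from target/filesystem.sh. Pieced together from the traced line numbers 18-43 it is roughly the following; the umount retry suggested by 'i=0' at @29 and the error branches between @30 and @37 never trigger here, so they are omitted:

nvmf_filesystem_create() {
	local fstype=$1
	local nvme_name=$2

	make_filesystem $fstype /dev/${nvme_name}p1    # @21
	mount /dev/${nvme_name}p1 /mnt/device          # @23
	touch /mnt/device/aaa                          # @24
	sync                                           # @25
	rm /mnt/device/aaa                             # @26
	sync                                           # @27

	i=0                                            # @29
	umount /mnt/device                             # @30 (retry loop assumed)

	kill -0 $nvmfpid                               # @37: the target must still be alive
	lsblk -l -o NAME | grep -q -w $nvme_name       # @40: namespace still visible
	lsblk -l -o NAME | grep -q -w ${nvme_name}p1   # @43: partition still visible
}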
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:03.579 22:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:23:03.579 22:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:23:03.579 22:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:03.579 22:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:23:03.579 22:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:03.579 22:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:23:03.579 22:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:03.579 22:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.579 22:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:23:03.579 22:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.579 22:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:23:03.579 22:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 88733 00:23:03.579 22:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 88733 ']' 00:23:03.579 22:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 88733 00:23:03.579 22:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:23:03.579 22:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:03.579 22:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88733 00:23:03.579 22:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:03.579 22:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:03.579 22:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88733' 00:23:03.579 killing process with pid 88733 00:23:03.579 22:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 88733 00:23:03.579 22:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@974 -- # wait 88733 00:23:03.839 22:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:23:03.839 00:23:03.840 real 0m21.032s 00:23:03.840 user 1m22.958s 00:23:03.840 sys 0m1.617s 00:23:03.840 22:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:03.840 22:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:23:03.840 ************************************ 00:23:03.840 END TEST nvmf_filesystem_no_in_capsule 00:23:03.840 ************************************ 00:23:03.840 22:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:23:03.840 22:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:03.840 22:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:03.840 22:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:23:03.840 ************************************ 00:23:03.840 START TEST nvmf_filesystem_in_capsule 00:23:03.840 ************************************ 00:23:03.840 22:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:23:03.840 22:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:23:03.840 22:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:23:03.840 22:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:03.840 22:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:03.840 22:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:23:03.840 22:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=93006 00:23:03.840 22:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 93006 00:23:03.840 22:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:03.840 22:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 93006 ']' 00:23:03.840 22:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:03.840 22:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:03.840 22:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:03.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
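The waitforlisten call above carries its parameters in the trace: pid 93006, rpc_addr defaulting to /var/tmp/spdk.sock (@835) and max_retries=100 (@836). The polling body itself is not traced because the target came up cleanly, so the probe below is an assumption:

waitforlisten() {
	local pid=$1
	local rpc_addr=${2:-/var/tmp/spdk.sock}   # @835
	local max_retries=100                     # @836
	local i=0

	echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
	while (( i++ < max_retries )); do
		kill -0 "$pid" 2> /dev/null || return 1   # give up if the target died
		# assumed probe: socket present and answering a trivial RPC
		[[ -S $rpc_addr ]] && rpc_cmd -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
		sleep 0.5
	done
	return 1
}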
00:23:03.840 22:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:03.840 22:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:23:04.100 [2024-10-01 22:21:59.120375] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:23:04.100 [2024-10-01 22:21:59.120431] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:04.100 [2024-10-01 22:21:59.187259] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:04.100 [2024-10-01 22:21:59.251413] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:04.100 [2024-10-01 22:21:59.251451] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:04.100 [2024-10-01 22:21:59.251459] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:04.100 [2024-10-01 22:21:59.251465] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:04.100 [2024-10-01 22:21:59.251471] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:04.100 [2024-10-01 22:21:59.251620] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:04.100 [2024-10-01 22:21:59.251757] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:04.100 [2024-10-01 22:21:59.252005] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:23:04.100 [2024-10-01 22:21:59.252006] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:04.673 22:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:04.673 22:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:23:04.673 22:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:04.673 22:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:04.673 22:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:23:04.933 22:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:04.933 22:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:23:04.933 22:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:23:04.933 22:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.933 22:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:23:04.933 [2024-10-01 22:21:59.961747] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:04.933 22:21:59 
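The step that makes this second pass "in capsule" is the transport creation just traced: -c 4096 sets the in-capsule data size, the value handed in through run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 and stored at filesystem.sh@47. Condensed (the first pass presumably received 0, which is what the '[ 4096 -eq 0 ]' branch at @76 later distinguishes):

in_capsule=$1                                                    # @47: 0 for the first pass, 4096 here
nvmfappstart -m 0xF                                              # @49
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c $in_capsule   # @52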
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.933 22:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:23:04.933 22:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.933 22:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:23:04.933 Malloc1 00:23:04.933 22:22:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.933 22:22:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:23:04.933 22:22:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.933 22:22:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:23:04.933 22:22:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.934 22:22:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:04.934 22:22:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.934 22:22:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:23:04.934 22:22:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.934 22:22:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:04.934 22:22:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.934 22:22:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:23:04.934 [2024-10-01 22:22:00.088105] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:04.934 22:22:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.934 22:22:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:23:04.934 22:22:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:23:04.934 22:22:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:04.934 22:22:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:23:04.934 22:22:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:23:04.934 22:22:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:23:04.934 22:22:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.934 22:22:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:23:04.934 22:22:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.934 22:22:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:04.934 { 00:23:04.934 "name": "Malloc1", 00:23:04.934 "aliases": [ 00:23:04.934 "db7bfa0b-f327-4efd-a488-d0b61a447e54" 00:23:04.934 ], 00:23:04.934 "product_name": "Malloc disk", 00:23:04.934 "block_size": 512, 00:23:04.934 "num_blocks": 1048576, 00:23:04.934 "uuid": "db7bfa0b-f327-4efd-a488-d0b61a447e54", 00:23:04.934 "assigned_rate_limits": { 00:23:04.934 "rw_ios_per_sec": 0, 00:23:04.934 "rw_mbytes_per_sec": 0, 00:23:04.934 "r_mbytes_per_sec": 0, 00:23:04.934 "w_mbytes_per_sec": 0 00:23:04.934 }, 00:23:04.934 "claimed": true, 00:23:04.934 "claim_type": "exclusive_write", 00:23:04.934 "zoned": false, 00:23:04.934 "supported_io_types": { 00:23:04.934 "read": true, 00:23:04.934 "write": true, 00:23:04.934 "unmap": true, 00:23:04.934 "flush": true, 00:23:04.934 "reset": true, 00:23:04.934 "nvme_admin": false, 00:23:04.934 "nvme_io": false, 00:23:04.934 "nvme_io_md": false, 00:23:04.934 "write_zeroes": true, 00:23:04.934 "zcopy": true, 00:23:04.934 "get_zone_info": false, 00:23:04.934 "zone_management": false, 00:23:04.934 "zone_append": false, 00:23:04.934 "compare": false, 00:23:04.934 "compare_and_write": false, 00:23:04.934 "abort": true, 00:23:04.934 "seek_hole": false, 00:23:04.934 "seek_data": false, 00:23:04.934 "copy": true, 00:23:04.934 "nvme_iov_md": false 00:23:04.934 }, 00:23:04.934 "memory_domains": [ 00:23:04.934 { 00:23:04.934 "dma_device_id": "system", 00:23:04.934 "dma_device_type": 1 00:23:04.934 }, 00:23:04.934 { 00:23:04.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:04.934 "dma_device_type": 2 00:23:04.934 } 00:23:04.934 ], 00:23:04.934 "driver_specific": {} 00:23:04.934 } 00:23:04.934 ]' 00:23:04.934 22:22:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:04.934 22:22:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:23:04.934 22:22:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:05.194 22:22:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:23:05.194 22:22:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:23:05.194 22:22:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:23:05.194 22:22:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:23:05.194 22:22:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:06.579 22:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:23:06.579 22:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:23:06.579 22:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:06.579 22:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:06.579 22:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:23:09.121 22:22:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:09.121 22:22:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:09.121 22:22:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:23:09.121 22:22:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:09.121 22:22:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:09.121 22:22:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:23:09.121 22:22:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:23:09.121 22:22:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:23:09.121 22:22:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:23:09.121 22:22:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:23:09.121 22:22:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:23:09.121 22:22:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:09.121 22:22:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:23:09.121 22:22:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:23:09.121 22:22:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:23:09.121 22:22:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:23:09.122 22:22:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:23:09.122 22:22:04 
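waitforserial (common/autotest_common.sh lines 1198-1208, traced above) polls lsblk until the expected number of devices report the serial SPDKISFASTANDAWESOME. The 15-iteration cap and the 2-second sleep are visible in the trace; the failure path and the per-iteration delay are assumptions:

waitforserial() {
	local i=0                                      # @1198
	local nvme_device_counter=1 nvme_devices=0     # @1199
	[[ -n $2 ]] && nvme_device_counter=$2          # @1200

	sleep 2                                        # @1205: initial settle time
	while (( i++ <= 15 )); do                      # @1206
		nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$1")   # @1207
		(( nvme_devices == nvme_device_counter )) && return 0    # @1208
		sleep 2                                    # assumed per-iteration delay
	done
	return 1                                       # assumed failure path
}

The sec_size_to_bytes check that follows is plain arithmetic: the Malloc1 bdev was created with 1048576 blocks of 512 bytes, and 1048576 x 512 = 536870912 bytes, so nvme_size matches malloc_size exactly and the partitioning with parted proceeds.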
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:23:09.382 22:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:23:10.453 22:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:23:10.453 22:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:23:10.453 22:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:23:10.453 22:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:10.453 22:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:23:10.453 ************************************ 00:23:10.453 START TEST filesystem_in_capsule_ext4 00:23:10.453 ************************************ 00:23:10.453 22:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:23:10.453 22:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:23:10.453 22:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:23:10.453 22:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:23:10.453 22:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:23:10.453 22:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:23:10.453 22:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:23:10.453 22:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:23:10.453 22:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:23:10.453 22:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:23:10.453 22:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:23:10.453 mke2fs 1.47.0 (5-Feb-2023) 00:23:10.453 Discarding device blocks: 0/522240 done 00:23:10.453 Creating filesystem with 522240 1k blocks and 130560 inodes 00:23:10.453 Filesystem UUID: b12bb528-aeb6-4810-b993-c1a0d349c2fe 00:23:10.453 Superblock backups stored on blocks: 00:23:10.453 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:23:10.453 00:23:10.453 Allocating group tables: 0/64 done 00:23:10.453 Writing inode tables: 
0/64 done 00:23:11.836 Creating journal (8192 blocks): done 00:23:11.836 Writing superblocks and filesystem accounting information: 0/64 done 00:23:11.836 00:23:11.836 22:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:23:11.836 22:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:23:18.420 22:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:23:18.420 22:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:23:18.421 22:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:23:18.421 22:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:23:18.421 22:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:23:18.421 22:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:23:18.421 22:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 93006 00:23:18.421 22:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:23:18.421 22:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:23:18.421 22:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:23:18.421 22:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:23:18.421 00:23:18.421 real 0m7.553s 00:23:18.421 user 0m0.030s 00:23:18.421 sys 0m0.074s 00:23:18.421 22:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:18.421 22:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:23:18.421 ************************************ 00:23:18.421 END TEST filesystem_in_capsule_ext4 00:23:18.421 ************************************ 00:23:18.421 22:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:23:18.421 22:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:23:18.421 22:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:18.421 22:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:23:18.421 
************************************ 00:23:18.421 START TEST filesystem_in_capsule_btrfs 00:23:18.421 ************************************ 00:23:18.421 22:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:23:18.421 22:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:23:18.421 22:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:23:18.421 22:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:23:18.421 22:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:23:18.421 22:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:23:18.421 22:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:23:18.421 22:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:23:18.421 22:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:23:18.421 22:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:23:18.421 22:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:23:18.421 btrfs-progs v6.8.1 00:23:18.421 See https://btrfs.readthedocs.io for more information. 00:23:18.421 00:23:18.421 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:23:18.421 NOTE: several default settings have changed in version 5.15, please make sure 00:23:18.421 this does not affect your deployments: 00:23:18.421 - DUP for metadata (-m dup) 00:23:18.421 - enabled no-holes (-O no-holes) 00:23:18.421 - enabled free-space-tree (-R free-space-tree) 00:23:18.421 00:23:18.421 Label: (null) 00:23:18.421 UUID: b7b213de-a15b-4139-8894-4af53562ab6d 00:23:18.421 Node size: 16384 00:23:18.421 Sector size: 4096 (CPU page size: 4096) 00:23:18.421 Filesystem size: 510.00MiB 00:23:18.421 Block group profiles: 00:23:18.421 Data: single 8.00MiB 00:23:18.421 Metadata: DUP 32.00MiB 00:23:18.421 System: DUP 8.00MiB 00:23:18.421 SSD detected: yes 00:23:18.421 Zoned device: no 00:23:18.421 Features: extref, skinny-metadata, no-holes, free-space-tree 00:23:18.421 Checksum: crc32c 00:23:18.421 Number of devices: 1 00:23:18.421 Devices: 00:23:18.421 ID SIZE PATH 00:23:18.421 1 510.00MiB /dev/nvme0n1p1 00:23:18.421 00:23:18.421 22:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:23:18.421 22:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:23:19.362 22:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:23:19.362 22:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:23:19.362 22:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:23:19.362 22:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:23:19.362 22:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:23:19.362 22:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:23:19.362 22:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 93006 00:23:19.362 22:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:23:19.362 22:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:23:19.362 22:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:23:19.362 22:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:23:19.362 00:23:19.362 real 0m1.212s 00:23:19.362 user 0m0.031s 00:23:19.362 sys 0m0.118s 00:23:19.362 22:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:19.362 22:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- 
# set +x 00:23:19.362 ************************************ 00:23:19.362 END TEST filesystem_in_capsule_btrfs 00:23:19.362 ************************************ 00:23:19.362 22:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:23:19.362 22:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:23:19.362 22:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:19.362 22:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:23:19.362 ************************************ 00:23:19.362 START TEST filesystem_in_capsule_xfs 00:23:19.362 ************************************ 00:23:19.362 22:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:23:19.362 22:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:23:19.362 22:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:23:19.362 22:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:23:19.362 22:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:23:19.362 22:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:23:19.362 22:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:23:19.362 22:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:23:19.362 22:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:23:19.363 22:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:23:19.363 22:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:23:19.363 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:23:19.363 = sectsz=512 attr=2, projid32bit=1 00:23:19.363 = crc=1 finobt=1, sparse=1, rmapbt=0 00:23:19.363 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:23:19.363 data = bsize=4096 blocks=130560, imaxpct=25 00:23:19.363 = sunit=0 swidth=0 blks 00:23:19.363 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:23:19.363 log =internal log bsize=4096 blocks=16384, version=2 00:23:19.363 = sectsz=512 sunit=0 blks, lazy-count=1 00:23:19.363 realtime =none extsz=4096 blocks=0, rtextents=0 00:23:19.978 Discarding blocks...Done. 
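A quick check on the mkfs.xfs geometry printed above (identical in both xfs passes): the data section is 130560 blocks x 4096-byte bsize = 534773760 bytes = 510 MiB, matching the 510.00MiB partition parted created, and the internal log takes 16384 x 4096 = 64 MiB of that space.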
00:23:19.979 22:22:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:23:19.979 22:22:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:23:21.889 22:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:23:21.889 22:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:23:21.889 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:23:21.889 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:23:21.889 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:23:21.889 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:23:21.889 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 93006 00:23:21.889 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:23:21.889 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:23:21.889 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:23:21.889 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:23:21.889 00:23:21.889 real 0m2.650s 00:23:21.889 user 0m0.028s 00:23:21.889 sys 0m0.075s 00:23:21.889 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:21.889 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:23:21.889 ************************************ 00:23:21.889 END TEST filesystem_in_capsule_xfs 00:23:21.889 ************************************ 00:23:21.889 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:23:22.151 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:23:22.151 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:22.151 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:22.151 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:22.151 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:23:22.151 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:23:22.151 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:22.151 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:23:22.151 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:22.151 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:23:22.151 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:22.151 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.151 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:23:22.151 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.151 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:23:22.151 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 93006 00:23:22.151 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 93006 ']' 00:23:22.151 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 93006 00:23:22.151 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:23:22.151 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:22.151 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93006 00:23:22.151 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:22.151 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:22.151 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93006' 00:23:22.151 killing process with pid 93006 00:23:22.151 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 93006 00:23:22.151 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 93006 00:23:22.412 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:23:22.412 00:23:22.412 real 0m18.601s 00:23:22.412 user 1m13.355s 00:23:22.412 sys 0m1.458s 00:23:22.412 22:22:17 
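waitforserial_disconnect (common/autotest_common.sh lines 1219-1231, traced here and once before in the no-in-capsule pass) inverts waitforserial: it waits until no block device reports the serial any more. Both traced runs hit the already-disconnected fast path, so the loop below is a hedged reconstruction:

waitforserial_disconnect() {
	local i=0                                           # @1219
	while lsblk -o NAME,SERIAL | grep -q -w "$1"; do    # @1220
		(( ++i > 15 )) && return 1                      # retry cap assumed
		sleep 2                                         # delay assumed
	done
	lsblk -l -o NAME,SERIAL | grep -q -w "$1" && return 1   # @1227: recheck; polarity assumed
	return 0                                            # @1231
}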
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:22.412 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:23:22.412 ************************************ 00:23:22.412 END TEST nvmf_filesystem_in_capsule 00:23:22.412 ************************************ 00:23:22.674 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:23:22.674 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:22.674 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:23:22.674 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:22.674 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:23:22.674 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:22.674 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:22.674 rmmod nvme_tcp 00:23:22.674 rmmod nvme_fabrics 00:23:22.674 rmmod nvme_keyring 00:23:22.674 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:22.674 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:23:22.674 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:23:22.674 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:23:22.674 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:22.674 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:22.674 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:22.674 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:23:22.674 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-save 00:23:22.675 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:22.675 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-restore 00:23:22.675 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:22.675 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:22.675 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.675 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:22.675 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.600 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:24.600 00:23:24.601 real 0m49.909s 00:23:24.601 user 2m38.636s 00:23:24.601 sys 0m8.990s 00:23:24.601 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:24.601 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:23:24.601 
************************************ 00:23:24.601 END TEST nvmf_filesystem 00:23:24.601 ************************************ 00:23:24.862 22:22:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:23:24.862 22:22:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:24.862 22:22:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:24.862 22:22:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:24.862 ************************************ 00:23:24.862 START TEST nvmf_target_discovery 00:23:24.862 ************************************ 00:23:24.862 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:23:24.862 * Looking for test storage... 00:23:24.862 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:24.862 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:24.862 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:23:24.862 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:24.862 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:24.862 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:24.862 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:24.862 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:24.862 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:23:24.862 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:23:24.862 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:23:24.862 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:23:24.863 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:23:24.863 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:23:24.863 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:23:24.863 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:24.863 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:23:24.863 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:23:24.863 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:24.863 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:24.863 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:23:24.863 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:23:24.863 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:24.863 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:23:24.863 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:23:24.863 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:23:24.863 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:23:24.863 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:24.863 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:23:24.863 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:23:24.863 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:24.863 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:24.863 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:23:24.863 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:24.863 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:24.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.863 --rc genhtml_branch_coverage=1 00:23:24.863 --rc genhtml_function_coverage=1 00:23:24.863 --rc genhtml_legend=1 00:23:24.863 --rc geninfo_all_blocks=1 00:23:24.863 --rc geninfo_unexecuted_blocks=1 00:23:24.863 00:23:24.863 ' 00:23:24.863 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:24.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.863 --rc genhtml_branch_coverage=1 00:23:24.863 --rc genhtml_function_coverage=1 00:23:24.863 --rc genhtml_legend=1 00:23:24.863 --rc geninfo_all_blocks=1 00:23:24.863 --rc geninfo_unexecuted_blocks=1 00:23:24.863 00:23:24.863 ' 00:23:24.863 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:24.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.863 --rc genhtml_branch_coverage=1 00:23:24.863 --rc genhtml_function_coverage=1 00:23:24.863 --rc genhtml_legend=1 00:23:24.863 --rc geninfo_all_blocks=1 00:23:24.863 --rc geninfo_unexecuted_blocks=1 00:23:24.863 00:23:24.863 ' 00:23:24.863 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:24.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.863 --rc genhtml_branch_coverage=1 00:23:24.863 --rc genhtml_function_coverage=1 00:23:24.863 --rc genhtml_legend=1 00:23:24.863 --rc geninfo_all_blocks=1 00:23:24.863 --rc geninfo_unexecuted_blocks=1 00:23:24.863 00:23:24.863 ' 00:23:24.863 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:25.125 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:25.125 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:25.125 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:25.125 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:25.125 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:25.125 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:25.125 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:25.125 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:25.125 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:25.125 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:25.125 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:25.125 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:23:25.125 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:23:25.125 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:25.125 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:25.125 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:25.125 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:25.125 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:25.125 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:23:25.125 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:25.125 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:25.125 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:25.125 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.125 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.125 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.125 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:23:25.125 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.125 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:23:25.125 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:25.125 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:25.125 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:25.125 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:25.125 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:25.125 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:25.125 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:25.125 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:25.125 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:25.125 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:25.125 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:23:25.125 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:23:25.125 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:23:25.125 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:23:25.125 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:23:25.125 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:25.125 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:25.126 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:25.126 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:25.126 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:25.126 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.126 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:25.126 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.126 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:25.126 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:25.126 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:23:25.126 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.272 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:33.272 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:23:33.272 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:33.272 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:33.272 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:33.272 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:33.272 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:33.272 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:23:33.272 22:22:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:33.272 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:33.273 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:33.273 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:33.273 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
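Before the discovery test can bind a listener, prepare_net_devs walks the PCI bus for supported NICs (here the two Intel E810 ports, 8086:0x159b bound to the ice driver) and maps each one to its kernel net device through sysfs. A minimal sketch of that mapping, assuming the usual sysfs layout and that the up/down test behind the traced [[ up == up ]] reads operstate:

for pci in 0000:4b:00.0 0000:4b:00.1; do            # the two ports found above
  for net_path in "/sys/bus/pci/devices/$pci/net/"*; do
    net_dev=${net_path##*/}                         # e.g. cvl_0_0, cvl_0_1
    # keep only interfaces that are administratively up
    [ "$(cat "$net_path/operstate")" = "up" ] || continue
    echo "Found net devices under $pci: $net_dev"
  done
done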
00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:33.273 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:33.273 22:22:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:33.273 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:33.273 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.499 ms 00:23:33.273 00:23:33.273 --- 10.0.0.2 ping statistics --- 00:23:33.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.273 rtt min/avg/max/mdev = 0.499/0.499/0.499/0.000 ms 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:33.273 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:33.273 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:23:33.273 00:23:33.273 --- 10.0.0.1 ping statistics --- 00:23:33.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.273 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # return 0 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:33.273 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:33.274 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:33.274 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:33.274 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:23:33.274 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:33.274 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:33.274 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.274 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # nvmfpid=100950 00:23:33.274 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:33.274 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # waitforlisten 100950 00:23:33.274 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 100950 ']' 00:23:33.274 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:33.274 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:33.274 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:33.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:33.274 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:33.274 22:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.274 [2024-10-01 22:22:27.503848] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:23:33.274 [2024-10-01 22:22:27.503920] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:33.274 [2024-10-01 22:22:27.576354] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:33.274 [2024-10-01 22:22:27.651524] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:33.274 [2024-10-01 22:22:27.651563] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:33.274 [2024-10-01 22:22:27.651571] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:33.274 [2024-10-01 22:22:27.651578] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:33.274 [2024-10-01 22:22:27.651583] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
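With the NICs identified, nvmf_tcp_init wires the two physically looped ports into a back-to-back topology: the target port (cvl_0_0) moves into a private network namespace with 10.0.0.2/24, the initiator port (cvl_0_1) stays in the host namespace with 10.0.0.1/24, an iptables rule admits NVMe/TCP on port 4420, and the target runs inside the namespace. Condensed from the trace above (the address flushes are omitted and the iptables comment string is abbreviated here):

ip netns add cvl_0_0_ns_spdk                      # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side (host netns)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:...'            # tagged so teardown can strip it
ping -c 1 10.0.0.2                                # host -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target namespace -> host

# the target then starts inside the namespace, as traced above:
ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
  -i 0 -e 0xFFFF -m 0xF &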
00:23:33.274 [2024-10-01 22:22:27.651667] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:33.274 [2024-10-01 22:22:27.651881] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.274 [2024-10-01 22:22:27.651740] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:33.274 [2024-10-01 22:22:27.651881] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.274 [2024-10-01 22:22:28.357702] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.274 Null1 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.274 22:22:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.274 [2024-10-01 22:22:28.418017] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.274 Null2 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:23:33.274 Null3 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.274 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.536 Null4 00:23:33.536 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.536 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:23:33.536 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.536 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.536 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.536 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:23:33.536 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.536 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.536 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.536 22:22:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:23:33.536 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.536 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.536 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.536 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:33.536 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.536 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.536 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.536 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:23:33.536 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.536 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.536 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.536 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 4420 00:23:33.536 00:23:33.536 Discovery Log Number of Records 6, Generation counter 6 00:23:33.536 =====Discovery Log Entry 0====== 00:23:33.536 trtype: tcp 00:23:33.536 adrfam: ipv4 00:23:33.536 subtype: current discovery subsystem 00:23:33.536 treq: not required 00:23:33.536 portid: 0 00:23:33.536 trsvcid: 4420 00:23:33.536 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:33.536 traddr: 10.0.0.2 00:23:33.536 eflags: explicit discovery connections, duplicate discovery information 00:23:33.536 sectype: none 00:23:33.536 =====Discovery Log Entry 1====== 00:23:33.536 trtype: tcp 00:23:33.536 adrfam: ipv4 00:23:33.536 subtype: nvme subsystem 00:23:33.536 treq: not required 00:23:33.536 portid: 0 00:23:33.536 trsvcid: 4420 00:23:33.536 subnqn: nqn.2016-06.io.spdk:cnode1 00:23:33.536 traddr: 10.0.0.2 00:23:33.536 eflags: none 00:23:33.536 sectype: none 00:23:33.536 =====Discovery Log Entry 2====== 00:23:33.536 trtype: tcp 00:23:33.536 adrfam: ipv4 00:23:33.536 subtype: nvme subsystem 00:23:33.536 treq: not required 00:23:33.536 portid: 0 00:23:33.536 trsvcid: 4420 00:23:33.536 subnqn: nqn.2016-06.io.spdk:cnode2 00:23:33.536 traddr: 10.0.0.2 00:23:33.536 eflags: none 00:23:33.536 sectype: none 00:23:33.536 =====Discovery Log Entry 3====== 00:23:33.536 trtype: tcp 00:23:33.536 adrfam: ipv4 00:23:33.536 subtype: nvme subsystem 00:23:33.536 treq: not required 00:23:33.536 portid: 0 00:23:33.536 trsvcid: 4420 00:23:33.536 subnqn: nqn.2016-06.io.spdk:cnode3 00:23:33.536 traddr: 10.0.0.2 00:23:33.536 eflags: none 00:23:33.536 sectype: none 00:23:33.536 =====Discovery Log Entry 4====== 00:23:33.536 trtype: tcp 00:23:33.536 adrfam: ipv4 00:23:33.536 subtype: nvme subsystem 
00:23:33.536 treq: not required 00:23:33.536 portid: 0 00:23:33.536 trsvcid: 4420 00:23:33.536 subnqn: nqn.2016-06.io.spdk:cnode4 00:23:33.536 traddr: 10.0.0.2 00:23:33.536 eflags: none 00:23:33.536 sectype: none 00:23:33.536 =====Discovery Log Entry 5====== 00:23:33.536 trtype: tcp 00:23:33.536 adrfam: ipv4 00:23:33.536 subtype: discovery subsystem referral 00:23:33.536 treq: not required 00:23:33.536 portid: 0 00:23:33.536 trsvcid: 4430 00:23:33.536 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:33.536 traddr: 10.0.0.2 00:23:33.536 eflags: none 00:23:33.536 sectype: none 00:23:33.536 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:23:33.536 Perform nvmf subsystem discovery via RPC 00:23:33.536 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:23:33.537 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.537 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.537 [ 00:23:33.537 { 00:23:33.537 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:33.537 "subtype": "Discovery", 00:23:33.537 "listen_addresses": [ 00:23:33.537 { 00:23:33.537 "trtype": "TCP", 00:23:33.537 "adrfam": "IPv4", 00:23:33.537 "traddr": "10.0.0.2", 00:23:33.537 "trsvcid": "4420" 00:23:33.537 } 00:23:33.537 ], 00:23:33.537 "allow_any_host": true, 00:23:33.537 "hosts": [] 00:23:33.537 }, 00:23:33.537 { 00:23:33.537 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:33.537 "subtype": "NVMe", 00:23:33.537 "listen_addresses": [ 00:23:33.537 { 00:23:33.537 "trtype": "TCP", 00:23:33.537 "adrfam": "IPv4", 00:23:33.537 "traddr": "10.0.0.2", 00:23:33.537 "trsvcid": "4420" 00:23:33.537 } 00:23:33.537 ], 00:23:33.537 "allow_any_host": true, 00:23:33.537 "hosts": [], 00:23:33.537 "serial_number": "SPDK00000000000001", 00:23:33.537 "model_number": "SPDK bdev Controller", 00:23:33.537 "max_namespaces": 32, 00:23:33.537 "min_cntlid": 1, 00:23:33.537 "max_cntlid": 65519, 00:23:33.537 "namespaces": [ 00:23:33.537 { 00:23:33.537 "nsid": 1, 00:23:33.537 "bdev_name": "Null1", 00:23:33.537 "name": "Null1", 00:23:33.537 "nguid": "6485F293988D4C7A903D9D893ED43915", 00:23:33.537 "uuid": "6485f293-988d-4c7a-903d-9d893ed43915" 00:23:33.537 } 00:23:33.537 ] 00:23:33.537 }, 00:23:33.537 { 00:23:33.537 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:23:33.537 "subtype": "NVMe", 00:23:33.537 "listen_addresses": [ 00:23:33.537 { 00:23:33.537 "trtype": "TCP", 00:23:33.537 "adrfam": "IPv4", 00:23:33.537 "traddr": "10.0.0.2", 00:23:33.537 "trsvcid": "4420" 00:23:33.537 } 00:23:33.537 ], 00:23:33.537 "allow_any_host": true, 00:23:33.537 "hosts": [], 00:23:33.537 "serial_number": "SPDK00000000000002", 00:23:33.537 "model_number": "SPDK bdev Controller", 00:23:33.537 "max_namespaces": 32, 00:23:33.537 "min_cntlid": 1, 00:23:33.537 "max_cntlid": 65519, 00:23:33.537 "namespaces": [ 00:23:33.537 { 00:23:33.537 "nsid": 1, 00:23:33.537 "bdev_name": "Null2", 00:23:33.537 "name": "Null2", 00:23:33.537 "nguid": "8CCF273E36FC44ADB9539E45F0D46701", 00:23:33.537 "uuid": "8ccf273e-36fc-44ad-b953-9e45f0d46701" 00:23:33.537 } 00:23:33.537 ] 00:23:33.537 }, 00:23:33.537 { 00:23:33.537 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:23:33.537 "subtype": "NVMe", 00:23:33.537 "listen_addresses": [ 00:23:33.537 { 00:23:33.537 "trtype": "TCP", 00:23:33.537 "adrfam": "IPv4", 00:23:33.537 "traddr": "10.0.0.2", 
00:23:33.537 "trsvcid": "4420" 00:23:33.537 } 00:23:33.537 ], 00:23:33.537 "allow_any_host": true, 00:23:33.537 "hosts": [], 00:23:33.537 "serial_number": "SPDK00000000000003", 00:23:33.537 "model_number": "SPDK bdev Controller", 00:23:33.537 "max_namespaces": 32, 00:23:33.537 "min_cntlid": 1, 00:23:33.537 "max_cntlid": 65519, 00:23:33.537 "namespaces": [ 00:23:33.537 { 00:23:33.537 "nsid": 1, 00:23:33.537 "bdev_name": "Null3", 00:23:33.537 "name": "Null3", 00:23:33.537 "nguid": "D904567C88FC4DA1878E2217FD57CE9B", 00:23:33.537 "uuid": "d904567c-88fc-4da1-878e-2217fd57ce9b" 00:23:33.537 } 00:23:33.537 ] 00:23:33.537 }, 00:23:33.537 { 00:23:33.537 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:23:33.537 "subtype": "NVMe", 00:23:33.537 "listen_addresses": [ 00:23:33.537 { 00:23:33.537 "trtype": "TCP", 00:23:33.537 "adrfam": "IPv4", 00:23:33.537 "traddr": "10.0.0.2", 00:23:33.537 "trsvcid": "4420" 00:23:33.537 } 00:23:33.537 ], 00:23:33.537 "allow_any_host": true, 00:23:33.537 "hosts": [], 00:23:33.537 "serial_number": "SPDK00000000000004", 00:23:33.537 "model_number": "SPDK bdev Controller", 00:23:33.537 "max_namespaces": 32, 00:23:33.537 "min_cntlid": 1, 00:23:33.537 "max_cntlid": 65519, 00:23:33.537 "namespaces": [ 00:23:33.537 { 00:23:33.537 "nsid": 1, 00:23:33.537 "bdev_name": "Null4", 00:23:33.537 "name": "Null4", 00:23:33.537 "nguid": "4F7F857706764BB1BBE0AF9FA9D6D2DB", 00:23:33.537 "uuid": "4f7f8577-0676-4bb1-bbe0-af9fa9d6d2db" 00:23:33.537 } 00:23:33.537 ] 00:23:33.537 } 00:23:33.537 ] 00:23:33.537 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.537 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:23:33.537 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:23:33.537 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:33.537 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.537 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.537 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.537 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:23:33.537 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.537 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.799 22:22:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.799 22:22:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:33.799 rmmod nvme_tcp 00:23:33.799 rmmod nvme_fabrics 00:23:33.799 rmmod nvme_keyring 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@515 -- # '[' -n 100950 ']' 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # killprocess 100950 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 100950 ']' 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 100950 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:33.799 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100950 00:23:33.799 22:22:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:33.799 22:22:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:33.799 22:22:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100950' 00:23:33.799 killing process with pid 100950 00:23:33.799 22:22:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 100950 00:23:33.799 22:22:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 100950 00:23:34.059 22:22:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:34.059 22:22:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:34.059 22:22:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:34.059 22:22:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:23:34.059 22:22:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-save 00:23:34.059 22:22:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:34.059 22:22:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:23:34.059 22:22:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:34.059 22:22:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:34.059 22:22:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:34.059 22:22:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:34.059 22:22:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.603 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:36.603 00:23:36.603 real 0m11.394s 00:23:36.603 user 0m8.495s 00:23:36.603 sys 0m5.971s 00:23:36.603 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:36.603 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.603 ************************************ 00:23:36.603 END TEST nvmf_target_discovery 00:23:36.603 ************************************ 00:23:36.603 22:22:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:23:36.603 22:22:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:36.603 22:22:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:36.603 22:22:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:36.603 ************************************ 00:23:36.603 START TEST nvmf_referrals 00:23:36.603 ************************************ 00:23:36.603 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:23:36.603 * Looking for test storage... 
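For orientation, the nvmf_target_discovery teardown traced above amounts to the following plain-shell sketch (rpc_cmd is the harness wrapper around scripts/rpc.py; the subsystem and bdev names, and the discovery.sh line numbers, are the ones visible in the trace):

  # discovery.sh lines 42-44: drop each test subsystem, then its backing null bdev
  for i in $(seq 1 4); do
      rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
      rpc_cmd bdev_null_delete "Null$i"
  done
  # line 47: remove the referral added during setup
  rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
  # lines 49-50: verify nothing is left behind
  check_bdevs=$(rpc_cmd bdev_get_bdevs | jq -r '.[].name')
  [ -n "$check_bdevs" ] && echo "leftover bdevs: $check_bdevs"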
00:23:36.603 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:36.603 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:36.603 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lcov --version 00:23:36.603 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:36.603 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:36.603 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:36.603 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:36.603 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:36.603 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:23:36.603 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:23:36.603 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:23:36.603 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:23:36.603 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:23:36.603 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:23:36.603 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:23:36.603 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:36.603 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:23:36.603 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:23:36.603 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:36.603 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:36.603 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:23:36.603 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:23:36.603 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:36.603 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:23:36.603 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:23:36.603 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:23:36.603 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:23:36.603 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:36.603 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:23:36.603 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:23:36.603 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:36.603 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:36.603 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:23:36.603 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:36.603 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:36.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.603 --rc genhtml_branch_coverage=1 00:23:36.603 --rc genhtml_function_coverage=1 00:23:36.603 --rc genhtml_legend=1 00:23:36.603 --rc geninfo_all_blocks=1 00:23:36.603 --rc geninfo_unexecuted_blocks=1 00:23:36.603 00:23:36.603 ' 00:23:36.603 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:36.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.603 --rc genhtml_branch_coverage=1 00:23:36.603 --rc genhtml_function_coverage=1 00:23:36.603 --rc genhtml_legend=1 00:23:36.603 --rc geninfo_all_blocks=1 00:23:36.603 --rc geninfo_unexecuted_blocks=1 00:23:36.603 00:23:36.603 ' 00:23:36.603 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:36.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.603 --rc genhtml_branch_coverage=1 00:23:36.603 --rc genhtml_function_coverage=1 00:23:36.603 --rc genhtml_legend=1 00:23:36.603 --rc geninfo_all_blocks=1 00:23:36.603 --rc geninfo_unexecuted_blocks=1 00:23:36.603 00:23:36.603 ' 00:23:36.603 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:36.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.603 --rc genhtml_branch_coverage=1 00:23:36.603 --rc genhtml_function_coverage=1 00:23:36.603 --rc genhtml_legend=1 00:23:36.603 --rc geninfo_all_blocks=1 00:23:36.603 --rc geninfo_unexecuted_blocks=1 00:23:36.603 00:23:36.604 ' 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:36.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:23:36.604 22:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:23:44.746 22:22:38 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:44.746 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:44.746 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:44.746 
22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:44.746 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:44.746 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # is_hw=yes 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:44.746 22:22:38 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:44.746 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:44.747 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:44.747 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:44.747 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:44.747 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:44.747 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:44.747 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:44.747 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:44.747 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:44.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:44.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:23:44.747 00:23:44.747 --- 10.0.0.2 ping statistics --- 00:23:44.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.747 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:23:44.747 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:44.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:44.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:23:44.747 00:23:44.747 --- 10.0.0.1 ping statistics --- 00:23:44.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.747 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:23:44.747 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:44.747 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # return 0 00:23:44.747 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:44.747 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:44.747 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:44.747 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:44.747 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:44.747 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:44.747 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:44.747 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:23:44.747 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:44.747 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:44.747 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:23:44.747 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # nvmfpid=105624 00:23:44.747 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # waitforlisten 105624 00:23:44.747 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:44.747 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 105624 ']' 00:23:44.747 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:44.747 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:44.747 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:44.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
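Condensed, the target-side bring-up captured in this stretch of the trace is (a sketch assuming the cvl_0_0/cvl_0_1 interface names the script detected; the full nvmf_tgt path under the workspace is abbreviated):

  # nvmf_tcp_init: isolate the target port in its own network namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port; the comment lets cleanup find and delete the rule
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # verify reachability both ways, then start the target inside the namespace
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &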
00:23:44.747 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:44.747 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:23:44.747 [2024-10-01 22:22:38.984223] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:23:44.747 [2024-10-01 22:22:38.984289] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:44.747 [2024-10-01 22:22:39.060424] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:44.747 [2024-10-01 22:22:39.134714] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:44.747 [2024-10-01 22:22:39.134755] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:44.747 [2024-10-01 22:22:39.134763] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:44.747 [2024-10-01 22:22:39.134770] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:44.747 [2024-10-01 22:22:39.134776] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:44.747 [2024-10-01 22:22:39.134916] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:44.747 [2024-10-01 22:22:39.135036] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:44.747 [2024-10-01 22:22:39.135193] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.747 [2024-10-01 22:22:39.135194] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:23:44.747 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:44.747 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:23:44.747 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:44.747 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:44.747 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:23:44.747 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:44.747 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:44.747 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.747 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:23:44.747 [2024-10-01 22:22:39.839756] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:44.747 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.747 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:23:44.747 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.747 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
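The fixture referrals.sh builds next is, in outline (a sketch using the same RPCs that appear in the trace, with the three referral targets folded into a loop):

  # TCP transport plus a discovery listener on the target IP, port 8009
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  # three discovery referrals, all on the referral port 4430
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      rpc_cmd nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  # the test then expects exactly three entries back
  (( $(rpc_cmd nvmf_discovery_get_referrals | jq length) == 3 ))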
00:23:44.747 [2024-10-01 22:22:39.851967] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:44.747 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.747 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:23:44.747 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.747 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:23:44.747 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.747 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:23:44.747 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.747 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:23:44.747 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.747 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:23:44.747 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.747 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:23:44.747 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.747 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:23:44.747 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:23:44.747 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.747 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:23:44.747 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.747 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:23:44.747 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:23:44.747 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:23:44.747 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:23:44.747 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.747 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:23:44.747 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:23:44.747 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:23:44.747 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.747 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:23:44.747 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:23:44.747 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:23:44.747 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:23:44.747 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:23:44.747 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 8009 -o json 00:23:44.747 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:23:44.747 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:23:45.009 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:23:45.009 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:23:45.009 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:23:45.009 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.009 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:23:45.009 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.009 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:23:45.009 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.009 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:23:45.009 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.009 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:23:45.009 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.009 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:23:45.009 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.009 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:23:45.009 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.009 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:23:45.009 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:23:45.009 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.009 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:23:45.009 22:22:40 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:23:45.009 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:23:45.009 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:23:45.009 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 8009 -o json 00:23:45.009 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:23:45.009 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:23:45.271 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:23:45.271 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:23:45.271 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:23:45.271 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.271 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:23:45.271 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.271 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:23:45.271 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.271 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:23:45.271 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.271 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:23:45.271 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:23:45.271 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:23:45.271 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:23:45.271 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.271 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:23:45.272 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:23:45.272 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.532 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:23:45.532 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:23:45.532 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:23:45.532 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:23:45.532 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:23:45.532 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:23:45.532 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 8009 -o json 00:23:45.532 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:23:45.532 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:23:45.532 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:23:45.532 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:23:45.532 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:23:45.532 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:23:45.532 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 8009 -o json 00:23:45.532 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:23:45.793 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:23:45.793 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:23:45.793 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:23:45.793 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:23:45.793 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 8009 -o json 00:23:45.793 22:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:23:46.053 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:23:46.053 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:23:46.053 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.053 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:23:46.053 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.053 22:22:41 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:23:46.053 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:23:46.053 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:23:46.053 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:23:46.053 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.053 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:23:46.053 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:23:46.053 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.053 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:23:46.053 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:23:46.053 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:23:46.053 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:23:46.053 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:23:46.053 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 8009 -o json 00:23:46.053 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:23:46.053 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:23:46.313 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:23:46.313 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:23:46.313 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:23:46.313 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:23:46.313 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:23:46.313 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 8009 -o json 00:23:46.313 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:23:46.574 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:23:46.574 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:23:46.574 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:23:46.574 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:23:46.574 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 8009 -o json 00:23:46.574 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:23:46.834 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:23:46.834 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:23:46.834 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.834 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:23:46.834 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.834 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:23:46.834 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:23:46.834 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.834 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:23:46.834 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.834 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:23:46.834 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:23:46.834 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:23:46.834 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:23:46.834 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 8009 -o json 00:23:46.835 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:23:46.835 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:23:47.096 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:23:47.096 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:23:47.096 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:23:47.096 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:23:47.096 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:47.096 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:23:47.096 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
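The trace above cross-checks the referral list two ways: through the target's RPC interface (nvmf_discovery_get_referrals) and through a real discovery from the host side (nvme discover against 10.0.0.2:8009), then string-compares the sorted address lists. A minimal standalone sketch of the same comparison, assuming a running SPDK target and scripts/rpc.py on PATH (the script's rpc_cmd is a thin wrapper around it):

    # Addresses the target claims to refer hosts to, via RPC
    rpc_ips=$(rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort)
    # Addresses a host actually sees in the discovery log page
    nvme_ips=$(nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort)
    [[ "$rpc_ips" == "$nvme_ips" ]] && echo "referrals consistent: $rpc_ips"

After nvmf_discovery_remove_referral (referrals.sh@71 above), re-running both probes is how the test confirms the entry disappeared from the host's view, not just from the target's RPC state.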
00:23:47.096 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:23:47.096 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:47.096 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:47.096 rmmod nvme_tcp 00:23:47.096 rmmod nvme_fabrics 00:23:47.097 rmmod nvme_keyring 00:23:47.097 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:47.097 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:23:47.097 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:23:47.097 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@515 -- # '[' -n 105624 ']' 00:23:47.097 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # killprocess 105624 00:23:47.097 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 105624 ']' 00:23:47.097 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 105624 00:23:47.097 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:23:47.097 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:47.097 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 105624 00:23:47.097 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:47.097 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:47.097 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 105624' 00:23:47.097 killing process with pid 105624 00:23:47.097 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 105624 00:23:47.097 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 105624 00:23:47.357 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:47.357 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:47.357 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:47.357 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:23:47.357 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-save 00:23:47.357 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:47.357 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-restore 00:23:47.357 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:47.357 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:47.357 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.357 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:47.357 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:49.269 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:49.269 00:23:49.269 real 0m13.118s 00:23:49.269 user 0m16.212s 00:23:49.269 sys 0m6.309s 00:23:49.269 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:49.269 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:23:49.269 ************************************ 00:23:49.269 END TEST nvmf_referrals 00:23:49.269 ************************************ 00:23:49.530 22:22:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:23:49.530 22:22:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:49.530 22:22:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:49.530 22:22:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:49.530 ************************************ 00:23:49.530 START TEST nvmf_connect_disconnect 00:23:49.530 ************************************ 00:23:49.530 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:23:49.531 * Looking for test storage... 00:23:49.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:49.531 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:49.531 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:23:49.531 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:49.531 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:49.531 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:49.531 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:49.531 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:49.531 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:23:49.531 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:23:49.531 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:23:49.531 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:23:49.531 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:23:49.531 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:23:49.531 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:23:49.531 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:49.531 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 
-- # case "$op" in 00:23:49.531 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:23:49.531 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:49.531 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:49.531 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:23:49.531 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:23:49.531 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:49.531 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:23:49.531 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:23:49.793 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:23:49.793 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:23:49.793 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:49.793 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:23:49.793 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:23:49.793 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:49.793 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:49.793 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:23:49.793 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:49.793 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:49.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.793 --rc genhtml_branch_coverage=1 00:23:49.793 --rc genhtml_function_coverage=1 00:23:49.793 --rc genhtml_legend=1 00:23:49.793 --rc geninfo_all_blocks=1 00:23:49.793 --rc geninfo_unexecuted_blocks=1 00:23:49.793 00:23:49.793 ' 00:23:49.793 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:49.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.793 --rc genhtml_branch_coverage=1 00:23:49.793 --rc genhtml_function_coverage=1 00:23:49.793 --rc genhtml_legend=1 00:23:49.793 --rc geninfo_all_blocks=1 00:23:49.793 --rc geninfo_unexecuted_blocks=1 00:23:49.793 00:23:49.793 ' 00:23:49.793 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:49.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.793 --rc genhtml_branch_coverage=1 00:23:49.793 --rc genhtml_function_coverage=1 00:23:49.793 --rc genhtml_legend=1 00:23:49.793 --rc geninfo_all_blocks=1 00:23:49.793 --rc geninfo_unexecuted_blocks=1 00:23:49.793 00:23:49.793 ' 00:23:49.793 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:49.793 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.793 --rc genhtml_branch_coverage=1 00:23:49.793 --rc genhtml_function_coverage=1 00:23:49.793 --rc genhtml_legend=1 00:23:49.793 --rc geninfo_all_blocks=1 00:23:49.793 --rc geninfo_unexecuted_blocks=1 00:23:49.793 00:23:49.793 ' 00:23:49.793 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:49.793 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:23:49.793 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:49.793 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:49.793 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:49.793 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:49.793 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:49.793 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:49.793 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:49.793 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:49.793 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:49.793 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:49.793 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:23:49.793 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:23:49.793 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:49.793 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:49.793 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:49.793 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:49.793 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:49.793 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:23:49.793 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:49.793 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:49.793 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:49.793 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.793 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.793 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.793 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:23:49.793 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.793 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:23:49.793 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:49.793 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:49.793 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:49.793 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:49.793 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:49.794 22:22:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:49.794 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:49.794 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:49.794 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:49.794 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:49.794 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:49.794 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:49.794 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:23:49.794 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:49.794 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:49.794 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:49.794 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:49.794 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:49.794 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:49.794 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:49.794 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:49.794 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:49.794 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:49.794 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:23:49.794 22:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:23:57.939 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:57.940 
22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:57.940 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:57.940 
22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:57.940 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:57.940 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
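The block above is gather_supported_nvmf_pci_devs resolving supported NICs: it matches PCI vendor/device IDs (this run found two Intel E810 ports, 0x8086:0x159b) and then maps each PCI function to its kernel network interface through sysfs. A reduced sketch of that mapping step, using the first bus address from this run:

    pci=0000:4b:00.0                                  # first E810 port found above
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")           # keep only the interface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"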
00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:57.940 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:57.940 22:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:57.940 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:57.940 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:23:57.940 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:57.940 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:57.940 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:57.940 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:57.940 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:57.940 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:57.940 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:57.940 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.681 ms 00:23:57.940 00:23:57.940 --- 10.0.0.2 ping statistics --- 00:23:57.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.940 rtt min/avg/max/mdev = 0.681/0.681/0.681/0.000 ms 00:23:57.940 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:57.940 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:57.940 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:23:57.941 00:23:57.941 --- 10.0.0.1 ping statistics --- 00:23:57.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.941 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:23:57.941 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:57.941 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # return 0 00:23:57.941 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:57.941 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:57.941 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:57.941 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:57.941 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:57.941 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:57.941 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:57.941 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:23:57.941 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:57.941 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:57.941 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:23:57.941 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # nvmfpid=110484 00:23:57.941 22:22:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # waitforlisten 110484 00:23:57.941 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:57.941 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 110484 ']' 00:23:57.941 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.941 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:57.941 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:57.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:57.941 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:57.941 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:23:57.941 [2024-10-01 22:22:52.374949] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:23:57.941 [2024-10-01 22:22:52.375017] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:57.941 [2024-10-01 22:22:52.447541] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:57.941 [2024-10-01 22:22:52.522285] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:57.941 [2024-10-01 22:22:52.522323] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:57.941 [2024-10-01 22:22:52.522331] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:57.941 [2024-10-01 22:22:52.522338] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:57.941 [2024-10-01 22:22:52.522344] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
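At this point the target side is up: one E810 port (cvl_0_0, 10.0.0.2) was moved into the cvl_0_0_ns_spdk namespace earlier, and nvmf_tgt now runs inside that namespace on four cores, so every nvme command issued from the default namespace crosses a real TCP path. A condensed sketch of the launch-and-wait pattern, assuming the namespace plumbing above and the default /var/tmp/spdk.sock RPC socket; the polling loop is a stand-in for the script's waitforlisten helper, not its exact body:

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # shm id 0, all tracepoints, cores 0-3
    nvmfpid=$!
    # block until the app answers RPCs on its UNIX socket
    until rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5
    done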
00:23:57.941 [2024-10-01 22:22:52.522478] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:57.941 [2024-10-01 22:22:52.522614] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:57.941 [2024-10-01 22:22:52.522771] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.941 [2024-10-01 22:22:52.522771] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:23:57.941 22:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:57.941 22:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:23:57.941 22:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:57.941 22:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:57.941 22:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:23:58.201 22:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:58.201 22:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:23:58.201 22:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.201 22:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:23:58.201 [2024-10-01 22:22:53.226959] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:58.201 22:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.201 22:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:23:58.201 22:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.201 22:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:23:58.201 22:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.201 22:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:23:58.201 22:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:23:58.201 22:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.201 22:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:23:58.201 22:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.201 22:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:58.202 22:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.202 22:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:23:58.202 22:22:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.202 22:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:58.202 22:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.202 22:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:23:58.202 [2024-10-01 22:22:53.286359] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:58.202 22:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.202 22:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:23:58.202 22:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:23:58.202 22:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:24:02.409 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:05.709 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:09.007 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:13.201 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:16.498 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:16.498 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:24:16.498 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:24:16.498 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:16.498 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:24:16.498 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:16.498 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:24:16.498 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:16.498 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:16.498 rmmod nvme_tcp 00:24:16.499 rmmod nvme_fabrics 00:24:16.499 rmmod nvme_keyring 00:24:16.499 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:16.499 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:24:16.499 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:24:16.499 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@515 -- # '[' -n 110484 ']' 00:24:16.499 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # killprocess 110484 00:24:16.499 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 110484 ']' 00:24:16.499 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 110484 00:24:16.499 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 
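connect_disconnect.sh provisions a single malloc-backed subsystem with the five RPCs traced above, then attaches and detaches a host controller num_iterations=5 times; each pass produced one of the "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines. The same flow as a standalone sketch, with rpc.py again standing in for the script's rpc_cmd wrapper:

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc.py bdev_malloc_create 64 512                    # 64 MiB, 512 B blocks -> Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    for i in $(seq 1 5); do
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints the disconnect lines above
    done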
00:24:16.499 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:16.499 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 110484 00:24:16.499 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:16.499 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:16.499 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 110484' 00:24:16.499 killing process with pid 110484 00:24:16.499 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 110484 00:24:16.499 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 110484 00:24:16.759 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:16.759 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:16.759 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:16.759 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:24:16.759 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:24:16.759 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:16.759 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:24:16.759 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:16.759 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:16.759 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.759 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:16.759 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:19.302 00:24:19.302 real 0m29.417s 00:24:19.302 user 1m19.159s 00:24:19.302 sys 0m7.180s 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:19.302 ************************************ 00:24:19.302 END TEST nvmf_connect_disconnect 00:24:19.302 ************************************ 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra 
-- common/autotest_common.sh@10 -- # set +x 00:24:19.302 ************************************ 00:24:19.302 START TEST nvmf_multitarget 00:24:19.302 ************************************ 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:24:19.302 * Looking for test storage... 00:24:19.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lcov --version 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:19.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:19.302 --rc genhtml_branch_coverage=1 00:24:19.302 --rc genhtml_function_coverage=1 00:24:19.302 --rc genhtml_legend=1 00:24:19.302 --rc geninfo_all_blocks=1 00:24:19.302 --rc geninfo_unexecuted_blocks=1 00:24:19.302 00:24:19.302 ' 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:19.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:19.302 --rc genhtml_branch_coverage=1 00:24:19.302 --rc genhtml_function_coverage=1 00:24:19.302 --rc genhtml_legend=1 00:24:19.302 --rc geninfo_all_blocks=1 00:24:19.302 --rc geninfo_unexecuted_blocks=1 00:24:19.302 00:24:19.302 ' 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:19.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:19.302 --rc genhtml_branch_coverage=1 00:24:19.302 --rc genhtml_function_coverage=1 00:24:19.302 --rc genhtml_legend=1 00:24:19.302 --rc geninfo_all_blocks=1 00:24:19.302 --rc geninfo_unexecuted_blocks=1 00:24:19.302 00:24:19.302 ' 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:19.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:19.302 --rc genhtml_branch_coverage=1 00:24:19.302 --rc genhtml_function_coverage=1 00:24:19.302 --rc genhtml_legend=1 00:24:19.302 --rc geninfo_all_blocks=1 00:24:19.302 --rc geninfo_unexecuted_blocks=1 00:24:19.302 00:24:19.302 ' 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:19.302 22:23:14 
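The lcov probe just replayed here (and once before, ahead of the connect/disconnect test) is SPDK's cmp_versions helper: split both version strings on '.', '-' and ':', then compare component by component, treating missing components as zero. The same rule as a self-contained function; the name version_lt is ours, the comparison logic follows the trace:

    version_lt() {                        # succeeds when $1 sorts strictly before $2
        local IFS=.-: v
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for (( v = 0; v < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); v++ )); do
            (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
            (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
        done
        return 1                          # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov 1.15 predates 2.x, enable the compat flags"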
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.302 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.303 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.303 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:24:19.303 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.303 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:24:19.303 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:19.303 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:19.303 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:19.303 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:19.303 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:19.303 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:19.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:19.303 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:19.303 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:19.303 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:19.303 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:24:19.303 22:23:14 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:24:19.303 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:19.303 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:19.303 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:19.303 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:19.303 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:19.303 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:19.303 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:19.303 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.303 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:19.303 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:19.303 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:24:19.303 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:24:27.441 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:27.441 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
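[editor's note] The "[: : integer expression expected" error recorded just above (nvmf/common.sh line 33, tracing '[' '' -eq 1 ']') is bash's complaint when -eq sees a non-integer operand: the left-hand variable expanded to an empty string. A minimal sketch of the failure and a guarded form; cfg_flag is a placeholder name, since the trace does not show which variable was empty:

    # Reproduces the logged failure: -eq needs integers on both sides,
    # and a quoted empty expansion is not an integer.
    unset cfg_flag                  # placeholder; the real variable is not named in the trace
    [ "$cfg_flag" -eq 1 ]           # -> [: : integer expression expected

    # Guarded form: default the expansion to 0 so the test always
    # compares two integers.
    [ "${cfg_flag:-0}" -eq 1 ] || echo "flag not set"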
00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:27.442 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:27.442 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:27.442 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:27.442 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # is_hw=yes 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:27.442 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:27.442 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:24:27.442 00:24:27.442 --- 10.0.0.2 ping statistics --- 00:24:27.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.442 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:27.442 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
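[editor's note] The "Found net devices under 0000:4b:00.0/1" lines earlier in this trace come from globbing sysfs to map each whitelisted PCI function to the net device the kernel created for it. A stripped-down sketch of that lookup, with the two device addresses taken from the trace:

    # Map the two discovered E810 functions to their netdev names.
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip to the basename
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done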
00:24:27.442 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:24:27.442 00:24:27.442 --- 10.0.0.1 ping statistics --- 00:24:27.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.442 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:27.442 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # return 0 00:24:27.443 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:27.443 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:27.443 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:27.443 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:27.443 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:27.443 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:27.443 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:27.443 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:24:27.443 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:27.443 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:27.443 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:24:27.443 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # nvmfpid=118528 00:24:27.443 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:27.443 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # waitforlisten 118528 00:24:27.443 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 118528 ']' 00:24:27.443 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:27.443 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:27.443 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:27.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:27.443 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:27.443 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:24:27.443 [2024-10-01 22:23:21.700323] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
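[editor's note] nvmf_tcp_init, traced above, splits the two ports across network namespaces so initiator traffic genuinely leaves the host stack: the target port moves into a private namespace, each side gets a 10.0.0.x/24 address, port 4420 is opened, and a ping in each direction proves reachability. Condensed from the commands in the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port, private ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator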
00:24:27.443 [2024-10-01 22:23:21.700392] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:27.443 [2024-10-01 22:23:21.771911] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:27.443 [2024-10-01 22:23:21.846300] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:27.443 [2024-10-01 22:23:21.846337] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:27.443 [2024-10-01 22:23:21.846347] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:27.443 [2024-10-01 22:23:21.846354] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:27.443 [2024-10-01 22:23:21.846360] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:27.443 [2024-10-01 22:23:21.846500] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:27.443 [2024-10-01 22:23:21.846640] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:27.443 [2024-10-01 22:23:21.846746] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:24:27.443 [2024-10-01 22:23:21.846917] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.443 22:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:27.443 22:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:24:27.443 22:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:27.443 22:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:27.443 22:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:24:27.443 22:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:27.443 22:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:27.443 22:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:24:27.443 22:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:24:27.443 22:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:24:27.443 22:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:24:27.703 "nvmf_tgt_1" 00:24:27.703 22:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:24:27.703 "nvmf_tgt_2" 00:24:27.703 22:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
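[editor's note] The multitarget body boils down to the RPC sequence traced around this point and in the length checks that follow: confirm one default target, create two more, confirm three, delete them, confirm one again. A condensed sketch; reading -s 32 as the per-target subsystem limit is my assumption, though the flag values themselves are from the trace:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # only the default target
    $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]   # default + the two new ones
    $rpc_py nvmf_delete_target -n nvmf_tgt_1
    $rpc_py nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # back to the default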
00:24:27.703 22:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:24:27.962 22:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:24:27.962 22:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:24:27.962 true 00:24:27.962 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:24:27.962 true 00:24:27.962 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:24:27.962 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:24:28.222 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:24:28.222 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:24:28.222 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:24:28.222 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:28.222 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:24:28.222 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:28.222 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:24:28.222 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:28.222 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:28.222 rmmod nvme_tcp 00:24:28.222 rmmod nvme_fabrics 00:24:28.223 rmmod nvme_keyring 00:24:28.223 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:28.223 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:24:28.223 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:24:28.223 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@515 -- # '[' -n 118528 ']' 00:24:28.223 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # killprocess 118528 00:24:28.223 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 118528 ']' 00:24:28.223 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 118528 00:24:28.223 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:24:28.223 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:28.223 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 118528 00:24:28.223 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:28.223 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:28.223 22:23:23 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 118528' 00:24:28.223 killing process with pid 118528 00:24:28.223 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 118528 00:24:28.223 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 118528 00:24:28.483 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:28.483 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:28.483 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:28.483 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:24:28.483 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-save 00:24:28.483 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:28.483 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-restore 00:24:28.483 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:28.483 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:28.483 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.483 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:28.483 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:31.026 00:24:31.026 real 0m11.611s 00:24:31.026 user 0m9.845s 00:24:31.026 sys 0m6.078s 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:24:31.026 ************************************ 00:24:31.026 END TEST nvmf_multitarget 00:24:31.026 ************************************ 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:31.026 ************************************ 00:24:31.026 START TEST nvmf_rpc 00:24:31.026 ************************************ 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:24:31.026 * Looking for test storage... 
00:24:31.026 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:31.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.026 --rc genhtml_branch_coverage=1 00:24:31.026 --rc genhtml_function_coverage=1 00:24:31.026 --rc genhtml_legend=1 00:24:31.026 --rc geninfo_all_blocks=1 00:24:31.026 --rc geninfo_unexecuted_blocks=1 00:24:31.026 00:24:31.026 ' 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:31.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.026 --rc genhtml_branch_coverage=1 00:24:31.026 --rc genhtml_function_coverage=1 00:24:31.026 --rc genhtml_legend=1 00:24:31.026 --rc geninfo_all_blocks=1 00:24:31.026 --rc geninfo_unexecuted_blocks=1 00:24:31.026 00:24:31.026 ' 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:31.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.026 --rc genhtml_branch_coverage=1 00:24:31.026 --rc genhtml_function_coverage=1 00:24:31.026 --rc genhtml_legend=1 00:24:31.026 --rc geninfo_all_blocks=1 00:24:31.026 --rc geninfo_unexecuted_blocks=1 00:24:31.026 00:24:31.026 ' 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:31.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.026 --rc genhtml_branch_coverage=1 00:24:31.026 --rc genhtml_function_coverage=1 00:24:31.026 --rc genhtml_legend=1 00:24:31.026 --rc geninfo_all_blocks=1 00:24:31.026 --rc geninfo_unexecuted_blocks=1 00:24:31.026 00:24:31.026 ' 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
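[editor's note] The lt 1.15 2 trace above (repeated from the multitarget run: is the installed lcov older than 2?) splits both version strings on ".", "-" and ":" and compares the fields numerically, left to right, padding the shorter one with zeros. A condensed sketch of that comparison, not the exact scripts/common.sh code, which also validates each field via its decimal helper:

    # Prints lt/gt/eq for two dotted version strings.
    cmp_versions() {
        local IFS=.-:                 # split fields on . - :
        local -a a=($1) b=($2)        # unquoted on purpose: IFS does the split
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && { echo lt; return; }
            (( ${a[i]:-0} > ${b[i]:-0} )) && { echo gt; return; }
        done
        echo eq
    }
    cmp_versions 1.15 2   # -> lt, so the old-lcov coverage flags get exported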
00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:24:31.026 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:31.026 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:31.026 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:31.026 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.026 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.026 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.026 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:24:31.026 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.026 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:24:31.026 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:31.027 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:31.027 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:31.027 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:31.027 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:31.027 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:31.027 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:31.027 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:31.027 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:31.027 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:31.027 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:24:31.027 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:24:31.027 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:31.027 22:23:26 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:31.027 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:31.027 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:31.027 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:31.027 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.027 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:31.027 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.027 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:31.027 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:31.027 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:24:31.027 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:39.170 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:39.170 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:24:39.170 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:39.170 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:39.170 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:39.170 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:39.170 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:39.170 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:24:39.170 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:39.170 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:24:39.170 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:24:39.170 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:24:39.170 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:24:39.170 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:24:39.170 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:24:39.170 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:39.170 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:39.170 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:39.170 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:39.170 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:39.170 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:39.170 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:39.170 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:39.170 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:39.170 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:39.170 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:39.170 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:39.170 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:39.170 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:39.170 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:39.170 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:39.170 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:39.171 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:39.171 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:39.171 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:39.171 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # is_hw=yes 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:39.171 22:23:33 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:39.171 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:39.171 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.570 ms 00:24:39.171 00:24:39.171 --- 10.0.0.2 ping statistics --- 00:24:39.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.171 rtt min/avg/max/mdev = 0.570/0.570/0.570/0.000 ms 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:39.171 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:39.171 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:24:39.171 00:24:39.171 --- 10.0.0.1 ping statistics --- 00:24:39.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.171 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # return 0 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # nvmfpid=123221 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # waitforlisten 123221 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 123221 ']' 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:39.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:39.171 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:39.171 [2024-10-01 22:23:33.466083] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
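[annotation] The nvmf_tcp_init steps traced above split one two-port NIC between the root network namespace and a dedicated target namespace, so initiator and target can exchange NVMe/TCP over real hardware on a single host. A condensed sketch of that setup (not part of the run), using the cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses discovered in the trace, and assuming standard iproute2/iptables tools:

# Sketch of the namespace topology built by nvmf_tcp_init above.
# cvl_0_0 becomes the target port inside the namespace; cvl_0_1 stays
# in the root namespace as the initiator port.
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                            # move target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator IP, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP, inside the namespace
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                         # root ns -> target ns
ip netns exec "$NS" ping -c 1 10.0.0.1                     # target ns -> root ns

With connectivity confirmed in both directions, nvmf_tgt is then launched inside the namespace (ip netns exec ... nvmf_tgt), which is why every listener later in the log lives at 10.0.0.2.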
00:24:39.171 [2024-10-01 22:23:33.466134] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:39.171 [2024-10-01 22:23:33.532911] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:39.171 [2024-10-01 22:23:33.599109] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:39.171 [2024-10-01 22:23:33.599161] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:39.171 [2024-10-01 22:23:33.599169] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:39.171 [2024-10-01 22:23:33.599175] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:39.171 [2024-10-01 22:23:33.599182] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:39.171 [2024-10-01 22:23:33.599322] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:39.171 [2024-10-01 22:23:33.599435] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:39.171 [2024-10-01 22:23:33.599590] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:39.172 [2024-10-01 22:23:33.599591] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:24:39.172 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:39.172 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:24:39.172 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:39.172 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:39.172 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:39.172 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:39.172 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:24:39.172 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.172 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:39.172 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.172 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:24:39.172 "tick_rate": 2400000000, 00:24:39.172 "poll_groups": [ 00:24:39.172 { 00:24:39.172 "name": "nvmf_tgt_poll_group_000", 00:24:39.172 "admin_qpairs": 0, 00:24:39.172 "io_qpairs": 0, 00:24:39.172 "current_admin_qpairs": 0, 00:24:39.172 "current_io_qpairs": 0, 00:24:39.172 "pending_bdev_io": 0, 00:24:39.172 "completed_nvme_io": 0, 00:24:39.172 "transports": [] 00:24:39.172 }, 00:24:39.172 { 00:24:39.172 "name": "nvmf_tgt_poll_group_001", 00:24:39.172 "admin_qpairs": 0, 00:24:39.172 "io_qpairs": 0, 00:24:39.172 "current_admin_qpairs": 0, 00:24:39.172 "current_io_qpairs": 0, 00:24:39.172 "pending_bdev_io": 0, 00:24:39.172 "completed_nvme_io": 0, 00:24:39.172 "transports": [] 00:24:39.172 }, 00:24:39.172 { 00:24:39.172 "name": "nvmf_tgt_poll_group_002", 00:24:39.172 "admin_qpairs": 0, 00:24:39.172 "io_qpairs": 0, 00:24:39.172 
"current_admin_qpairs": 0, 00:24:39.172 "current_io_qpairs": 0, 00:24:39.172 "pending_bdev_io": 0, 00:24:39.172 "completed_nvme_io": 0, 00:24:39.172 "transports": [] 00:24:39.172 }, 00:24:39.172 { 00:24:39.172 "name": "nvmf_tgt_poll_group_003", 00:24:39.172 "admin_qpairs": 0, 00:24:39.172 "io_qpairs": 0, 00:24:39.172 "current_admin_qpairs": 0, 00:24:39.172 "current_io_qpairs": 0, 00:24:39.172 "pending_bdev_io": 0, 00:24:39.172 "completed_nvme_io": 0, 00:24:39.172 "transports": [] 00:24:39.172 } 00:24:39.172 ] 00:24:39.172 }' 00:24:39.172 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:24:39.172 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:24:39.172 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:24:39.172 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:24:39.172 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:24:39.172 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:24:39.172 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:24:39.172 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:39.172 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.172 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:39.433 [2024-10-01 22:23:34.423980] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:24:39.433 "tick_rate": 2400000000, 00:24:39.433 "poll_groups": [ 00:24:39.433 { 00:24:39.433 "name": "nvmf_tgt_poll_group_000", 00:24:39.433 "admin_qpairs": 0, 00:24:39.433 "io_qpairs": 0, 00:24:39.433 "current_admin_qpairs": 0, 00:24:39.433 "current_io_qpairs": 0, 00:24:39.433 "pending_bdev_io": 0, 00:24:39.433 "completed_nvme_io": 0, 00:24:39.433 "transports": [ 00:24:39.433 { 00:24:39.433 "trtype": "TCP" 00:24:39.433 } 00:24:39.433 ] 00:24:39.433 }, 00:24:39.433 { 00:24:39.433 "name": "nvmf_tgt_poll_group_001", 00:24:39.433 "admin_qpairs": 0, 00:24:39.433 "io_qpairs": 0, 00:24:39.433 "current_admin_qpairs": 0, 00:24:39.433 "current_io_qpairs": 0, 00:24:39.433 "pending_bdev_io": 0, 00:24:39.433 "completed_nvme_io": 0, 00:24:39.433 "transports": [ 00:24:39.433 { 00:24:39.433 "trtype": "TCP" 00:24:39.433 } 00:24:39.433 ] 00:24:39.433 }, 00:24:39.433 { 00:24:39.433 "name": "nvmf_tgt_poll_group_002", 00:24:39.433 "admin_qpairs": 0, 00:24:39.433 "io_qpairs": 0, 00:24:39.433 "current_admin_qpairs": 0, 00:24:39.433 "current_io_qpairs": 0, 00:24:39.433 "pending_bdev_io": 0, 00:24:39.433 "completed_nvme_io": 0, 00:24:39.433 "transports": [ 00:24:39.433 { 00:24:39.433 "trtype": "TCP" 
00:24:39.433 } 00:24:39.433 ] 00:24:39.433 }, 00:24:39.433 { 00:24:39.433 "name": "nvmf_tgt_poll_group_003", 00:24:39.433 "admin_qpairs": 0, 00:24:39.433 "io_qpairs": 0, 00:24:39.433 "current_admin_qpairs": 0, 00:24:39.433 "current_io_qpairs": 0, 00:24:39.433 "pending_bdev_io": 0, 00:24:39.433 "completed_nvme_io": 0, 00:24:39.433 "transports": [ 00:24:39.433 { 00:24:39.433 "trtype": "TCP" 00:24:39.433 } 00:24:39.433 ] 00:24:39.433 } 00:24:39.433 ] 00:24:39.433 }' 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:39.433 Malloc1 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:39.433 [2024-10-01 22:23:34.615850] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -a 10.0.0.2 -s 4420 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -a 10.0.0.2 -s 4420 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:24:39.433 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -a 10.0.0.2 -s 4420 00:24:39.433 [2024-10-01 22:23:34.652955] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204' 00:24:39.434 Failed to write to /dev/nvme-fabrics: Input/output error 00:24:39.434 could not add new controller: failed to write to nvme-fabrics device 00:24:39.434 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:24:39.434 22:23:34 
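[annotation] The NOT wrapper traced here is the autotest negative-test helper: this connect is expected to fail, because allow_any_host was disabled and the host NQN has not been added to the subsystem yet. A simplified sketch of the pattern, assuming only plain bash (the real helper in autotest_common.sh also validates the executable via type -t/type -P and special-cases exit statuses above 128, as the es handling in the trace shows):

# Simplified sketch of the NOT pattern: run a command that is expected
# to fail, and succeed only if it really did fail.
NOT() {
    local es=0
    "$@" || es=$?     # capture the exit status instead of aborting under set -e
    (( es != 0 ))     # invert: a nonzero exit from the command means NOT succeeds
}

# Usage mirroring the trace: connecting before the host NQN is authorized
# must fail with "does not allow host" for the test to continue.
NOT nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204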
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:39.434 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:39.434 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:39.434 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:24:39.434 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.434 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:39.694 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.694 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:41.075 22:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:24:41.075 22:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:24:41.075 22:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:41.075 22:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:41.075 22:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:24:42.986 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:42.986 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:42.986 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:24:42.986 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:42.986 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:42.986 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:24:42.986 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:43.246 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:43.246 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:43.246 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:24:43.246 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:43.246 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:43.246 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:43.246 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:43.246 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:24:43.246 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:24:43.246 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.246 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:43.246 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.246 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:43.246 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:24:43.246 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:43.246 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:24:43.246 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:43.246 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:24:43.246 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:43.246 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:24:43.246 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:43.246 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:24:43.246 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:24:43.247 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:43.247 [2024-10-01 22:23:38.408600] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204' 00:24:43.247 Failed to write to /dev/nvme-fabrics: Input/output error 00:24:43.247 could not add new controller: failed to write to nvme-fabrics device 00:24:43.247 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:24:43.247 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:43.247 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:43.247 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:43.247 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:24:43.247 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.247 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:43.247 
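[annotation] The sequence above exercises the subsystem's host ACL in both modes: an explicit allowlist entry (nvmf_subsystem_add_host, then remove_host to make the connect fail again) and the global toggle (nvmf_subsystem_allow_any_host -e/-d). A sketch of the same RPC flow driven directly through SPDK's scripts/rpc.py, assuming the default /var/tmp/spdk.sock RPC socket shown earlier in the trace:

# Sketch of the host-ACL flow exercised above, via scripts/rpc.py.
RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
NQN=nqn.2016-06.io.spdk:cnode1
HOST=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204

$RPC nvmf_subsystem_allow_any_host -d "$NQN"    # deny by default: only listed hosts may connect
$RPC nvmf_subsystem_add_host "$NQN" "$HOST"     # allowlist one host NQN -> connect succeeds
$RPC nvmf_subsystem_remove_host "$NQN" "$HOST"  # drop it -> new connects fail again
$RPC nvmf_subsystem_allow_any_host -e "$NQN"    # or open the subsystem to any host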
22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.247 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:45.156 22:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:24:45.156 22:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:24:45.156 22:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:45.156 22:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:45.156 22:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:24:47.070 22:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:47.070 22:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:47.070 22:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:24:47.070 22:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:47.070 22:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:47.070 22:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:24:47.070 22:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:47.070 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:47.070 22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:47.070 22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:24:47.070 22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:47.070 22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:47.070 22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:47.070 22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:47.070 22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:24:47.070 22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:47.070 22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.070 22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:47.070 22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.070 22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:24:47.070 22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:24:47.070 22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:24:47.070 
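[annotation] waitforserial, traced after each successful connect above, is a polling loop: rather than assuming the fabric device appears instantly, it retries lsblk until a block device carrying the subsystem's serial number (SPDKISFASTANDAWESOME) shows up. A minimal sketch following the commands visible in the trace:

# Minimal sketch of the waitforserial polling seen in the trace: give the
# kernel up to ~16 tries, two seconds apart, to surface the namespace.
waitforserial() {
    local serial=$1 i=0
    while (( i++ <= 15 )); do
        # count block devices whose SERIAL column matches
        if (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )); then
            return 0
        fi
        sleep 2
    done
    return 1
}

waitforserial SPDKISFASTANDAWESOME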
22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.070 22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:47.070 22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.070 22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:47.070 22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.070 22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:47.070 [2024-10-01 22:23:42.137735] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:47.070 22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.070 22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:24:47.070 22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.070 22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:47.070 22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.070 22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:24:47.070 22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.070 22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:47.070 22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.070 22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:48.982 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:24:48.982 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:24:48.982 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:48.982 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:48.982 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:24:50.892 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:50.892 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:50.892 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:24:50.892 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:50.892 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:50.892 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:24:50.892 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:50.892 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:50.892 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:50.892 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:24:50.892 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:50.892 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:50.892 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:50.892 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:50.892 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:24:50.892 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:24:50.892 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.892 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:50.892 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.892 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:50.892 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.892 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:50.892 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.892 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:24:50.892 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:24:50.892 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.892 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:50.892 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.892 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:50.892 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.892 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:50.892 [2024-10-01 22:23:45.893653] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:50.892 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.892 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:24:50.892 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.892 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:50.893 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.893 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:24:50.893 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.893 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:50.893 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.893 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:52.275 22:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:24:52.275 22:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:24:52.275 22:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:52.275 22:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:52.275 22:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:24:54.819 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:54.819 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:54.819 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:24:54.819 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:54.819 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:54.819 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:24:54.819 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:54.819 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:54.819 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:54.819 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:24:54.819 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:54.819 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:54.819 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:54.819 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:54.819 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:24:54.819 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:24:54.819 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.819 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:54.819 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.819 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:54.819 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.819 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:54.819 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.819 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:24:54.819 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:24:54.819 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.819 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:54.819 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.819 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:54.819 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.819 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:54.819 [2024-10-01 22:23:49.646568] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:54.819 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.819 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:24:54.819 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.819 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:54.819 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.819 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:24:54.819 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.819 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:54.819 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.819 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:56.201 22:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:24:56.201 22:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:24:56.201 22:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:56.201 22:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:56.201 22:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:24:58.107 
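[annotation] The complementary helper, waitforserial_disconnect, whose trace follows each nvme disconnect above, polls in the opposite direction: it waits until the serial has vanished from both the wide and the list-format lsblk output. A matching sketch, with the loop structure assumed from the i=0 counter in the trace:

# Sketch of waitforserial_disconnect: poll until no block device with the
# given serial remains visible in either lsblk listing.
waitforserial_disconnect() {
    local serial=$1 i=0
    while (( i++ <= 15 )); do
        if ! lsblk -o NAME,SERIAL | grep -q -w "$serial" &&
           ! lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; then
            return 0
        fi
        sleep 2
    done
    return 1
}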
22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:58.107 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:58.107 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:24:58.107 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:58.107 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:58.107 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:24:58.107 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:58.107 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:58.107 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:58.107 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:24:58.107 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:58.107 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:58.107 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:58.107 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:58.107 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:24:58.107 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:24:58.107 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.107 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:58.107 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.107 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:58.107 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.107 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:58.107 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.107 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:24:58.107 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:24:58.107 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.107 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:58.107 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.107 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:58.107 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
00:24:58.107 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:58.367 [2024-10-01 22:23:53.365898] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:58.367 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.367 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:24:58.367 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.367 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:58.367 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.367 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:24:58.367 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.367 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:58.367 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.367 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:59.833 22:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:24:59.833 22:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:24:59.833 22:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:59.833 22:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:59.833 22:23:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:25:01.874 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:01.874 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:01.874 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:25:01.874 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:01.874 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:01.874 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:25:01.874 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:01.874 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:01.875 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:25:01.875 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:25:01.875 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:01.875 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 
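[annotation] Zooming out from the helpers: stripped of the xtrace noise, each pass of this seq 1 5 loop tears the whole stack down and rebuilds it. One iteration amounts to the following, using the same RPCs and nvme-cli calls as the log (rpc.py socket assumed as above; waitforserial/waitforserial_disconnect are the helpers sketched earlier):

# One iteration of the create/connect/teardown loop above.
RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
NQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5    # attach the malloc bdev as nsid 5
$RPC nvmf_subsystem_allow_any_host "$NQN"

nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420 # initiator side, root namespace
waitforserial SPDKISFASTANDAWESOME                # wait for the block device to appear
nvme disconnect -n "$NQN"
waitforserial_disconnect SPDKISFASTANDAWESOME     # wait for it to go away

$RPC nvmf_subsystem_remove_ns "$NQN" 5
$RPC nvmf_delete_subsystem "$NQN"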
00:25:01.875 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:01.875 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:01.875 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:25:01.875 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:25:01.875 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.875 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:01.875 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.875 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:01.875 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.875 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:01.875 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.875 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:25:01.875 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:25:01.875 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.875 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:01.875 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.875 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:01.875 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.875 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:01.875 [2024-10-01 22:23:57.114292] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:01.875 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.875 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:25:01.875 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.875 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:02.136 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.136 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:25:02.136 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.136 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:02.136 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.136 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:03.520 22:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:25:03.520 22:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:25:03.520 22:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:03.520 22:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:03.520 22:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:25:06.067 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:06.067 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:06.067 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:06.068 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:25:06.068 
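[annotation] The second seq 1 5 loop that starts here (rpc.sh lines 99-107) skips the initiator entirely and only cycles namespace attach/detach RPCs. Note the asymmetry with the first loop: the namespace is added without -n, so the target assigns the first free nsid (1), and that is the nsid the remove call names. A sketch of one iteration:

# One iteration of the RPC-only loop beginning here: no nvme connect, just
# namespace attach/detach. Without -n, the target picks the lowest free nsid.
RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
NQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_ns "$NQN" Malloc1    # auto-assigned nsid 1
$RPC nvmf_subsystem_allow_any_host "$NQN"
$RPC nvmf_subsystem_remove_ns "$NQN" 1
$RPC nvmf_delete_subsystem "$NQN"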
22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:06.068 [2024-10-01 22:24:00.879113] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:06.068 [2024-10-01 22:24:00.947249] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.068 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:06.068 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.068 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:06.068 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.068 
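Each pass of the rpc.sh@99-107 loop running above drives a full subsystem lifecycle over JSON-RPC: create, listen, attach a namespace, open host access, then tear it all back down. Condensed to bare rpc.py calls, one iteration looks roughly like this (script path, NQN, serial, and loop count are the ones in the trace; SPDK's rpc_cmd wrapper adds the xtrace/error plumbing omitted here):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for i in $(seq 1 5); do
        $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # nsid 1, per the remove below
        $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done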
22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:06.068 [2024-10-01 22:24:01.011409] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:06.068 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.068 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:06.068 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.068 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:06.068 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.068 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:25:06.068 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.068 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:06.068 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.068 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:06.068 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.068 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:06.068 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.068 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:06.068 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.068 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:06.068 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.068 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:25:06.068 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:25:06.068 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.068 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:06.068 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.068 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:06.068 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.068 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:06.068 [2024-10-01 22:24:01.075615] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:06.069 [2024-10-01 22:24:01.139827] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:25:06.069 "tick_rate": 2400000000, 00:25:06.069 "poll_groups": [ 00:25:06.069 { 00:25:06.069 "name": "nvmf_tgt_poll_group_000", 00:25:06.069 "admin_qpairs": 0, 00:25:06.069 "io_qpairs": 224, 00:25:06.069 "current_admin_qpairs": 0, 00:25:06.069 "current_io_qpairs": 0, 00:25:06.069 "pending_bdev_io": 0, 00:25:06.069 "completed_nvme_io": 224, 00:25:06.069 "transports": [ 00:25:06.069 { 00:25:06.069 "trtype": "TCP" 00:25:06.069 } 00:25:06.069 ] 00:25:06.069 }, 00:25:06.069 { 00:25:06.069 "name": "nvmf_tgt_poll_group_001", 00:25:06.069 "admin_qpairs": 1, 00:25:06.069 "io_qpairs": 223, 00:25:06.069 "current_admin_qpairs": 0, 00:25:06.069 "current_io_qpairs": 0, 00:25:06.069 "pending_bdev_io": 0, 00:25:06.069 "completed_nvme_io": 223, 00:25:06.069 "transports": [ 00:25:06.069 { 00:25:06.069 "trtype": "TCP" 00:25:06.069 } 00:25:06.069 ] 00:25:06.069 }, 00:25:06.069 { 00:25:06.069 "name": "nvmf_tgt_poll_group_002", 00:25:06.069 "admin_qpairs": 6, 00:25:06.069 "io_qpairs": 218, 00:25:06.069 "current_admin_qpairs": 0, 00:25:06.069 "current_io_qpairs": 0, 00:25:06.069 "pending_bdev_io": 0, 00:25:06.069 "completed_nvme_io": 273, 00:25:06.069 "transports": [ 00:25:06.069 { 00:25:06.069 "trtype": "TCP" 00:25:06.069 } 00:25:06.069 ] 00:25:06.069 }, 00:25:06.069 { 00:25:06.069 "name": "nvmf_tgt_poll_group_003", 00:25:06.069 "admin_qpairs": 0, 00:25:06.069 "io_qpairs": 224, 00:25:06.069 "current_admin_qpairs": 0, 00:25:06.069 "current_io_qpairs": 0, 00:25:06.069 "pending_bdev_io": 0, 00:25:06.069 "completed_nvme_io": 519, 00:25:06.069 "transports": [ 00:25:06.069 { 00:25:06.069 "trtype": "TCP" 00:25:06.069 } 00:25:06.069 ] 00:25:06.069 } 00:25:06.069 ] 00:25:06.069 }' 00:25:06.069 22:24:01 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:06.069 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:06.069 rmmod nvme_tcp 00:25:06.331 rmmod nvme_fabrics 00:25:06.331 rmmod nvme_keyring 00:25:06.331 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:06.331 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:25:06.331 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:25:06.331 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@515 -- # '[' -n 123221 ']' 00:25:06.331 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # killprocess 123221 00:25:06.331 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 123221 ']' 00:25:06.331 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 123221 00:25:06.331 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:25:06.331 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:06.331 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 123221 00:25:06.331 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:06.331 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:06.331 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 123221' 
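The two totals checked above come from jsum (rpc.sh@19-20), which extracts one numeric field from every poll group in the saved nvmf_get_stats JSON and sums it with awk. A standalone sketch of the same pipeline, assuming the stats JSON captured above is held in $stats:

    # Sum a numeric per-poll-group field, as jsum does in the trace.
    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }
    jsum '.poll_groups[].admin_qpairs'   # 0+1+6+0         -> 7, hence (( 7 > 0 ))
    jsum '.poll_groups[].io_qpairs'      # 224+223+218+224 -> 889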
00:25:06.331 killing process with pid 123221 00:25:06.331 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 123221 00:25:06.331 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 123221 00:25:06.592 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:06.592 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:06.592 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:06.592 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:25:06.592 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:06.592 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-save 00:25:06.592 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-restore 00:25:06.592 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:06.592 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:06.592 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:06.592 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:06.592 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:08.505 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:08.505 00:25:08.505 real 0m37.943s 00:25:08.505 user 1m54.026s 00:25:08.505 sys 0m7.690s 00:25:08.505 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:08.505 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:08.505 ************************************ 00:25:08.505 END TEST nvmf_rpc 00:25:08.505 ************************************ 00:25:08.767 22:24:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:25:08.767 22:24:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:08.767 22:24:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:08.767 22:24:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:08.767 ************************************ 00:25:08.767 START TEST nvmf_invalid 00:25:08.767 ************************************ 00:25:08.767 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:25:08.767 * Looking for test storage... 
00:25:08.767 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:08.767 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:08.767 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lcov --version 00:25:08.767 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:08.767 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:08.767 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:08.767 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:08.767 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:08.767 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:25:08.767 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:25:08.767 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:25:08.767 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:25:08.767 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:25:08.767 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:25:08.767 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:25:08.767 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:08.767 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:25:08.767 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:25:08.767 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:08.767 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:08.767 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:25:08.767 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:25:08.767 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:08.767 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:25:08.767 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:25:08.767 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:25:08.767 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:25:08.767 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:08.767 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:25:08.767 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:25:08.767 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:08.767 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:08.767 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:25:08.767 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:08.767 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:08.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.767 --rc genhtml_branch_coverage=1 00:25:08.767 --rc genhtml_function_coverage=1 00:25:08.767 --rc genhtml_legend=1 00:25:08.767 --rc geninfo_all_blocks=1 00:25:08.767 --rc geninfo_unexecuted_blocks=1 00:25:08.767 00:25:08.767 ' 00:25:08.767 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:08.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.767 --rc genhtml_branch_coverage=1 00:25:08.767 --rc genhtml_function_coverage=1 00:25:08.767 --rc genhtml_legend=1 00:25:08.767 --rc geninfo_all_blocks=1 00:25:08.767 --rc geninfo_unexecuted_blocks=1 00:25:08.767 00:25:08.767 ' 00:25:08.767 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:08.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.768 --rc genhtml_branch_coverage=1 00:25:08.768 --rc genhtml_function_coverage=1 00:25:08.768 --rc genhtml_legend=1 00:25:08.768 --rc geninfo_all_blocks=1 00:25:08.768 --rc geninfo_unexecuted_blocks=1 00:25:08.768 00:25:08.768 ' 00:25:08.768 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:08.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.768 --rc genhtml_branch_coverage=1 00:25:08.768 --rc genhtml_function_coverage=1 00:25:08.768 --rc genhtml_legend=1 00:25:08.768 --rc geninfo_all_blocks=1 00:25:08.768 --rc geninfo_unexecuted_blocks=1 00:25:08.768 00:25:08.768 ' 00:25:08.768 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:08.768 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:25:08.768 22:24:03 
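The lt 1.15 2 check above walks through scripts/common.sh's cmp_versions: both version strings are split on '.', '-' and ':' via IFS, then compared component by component as decimals. A condensed sketch of that comparison (numeric-only components assumed, as the [[ d =~ ^[0-9]+$ ]] guards in the trace enforce):

    # Return 0 when $1 < $2, comparing version components numerically
    # (sketch of the cmp_versions logic at scripts/common.sh@333-368).
    version_lt() {
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # greater
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # less
        done
        return 1   # equal: not less-than
    }
    version_lt 1.15 2 && echo "lcov 1.15 predates 2"   # mirrors the lt check above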
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:08.768 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:08.768 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:08.768 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:08.768 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:08.768 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:08.768 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:08.768 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:08.768 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:08.768 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:08.768 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:08.768 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:08.768 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:08.768 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:08.768 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:08.768 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:08.768 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:08.768 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:25:08.768 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:08.768 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:08.768 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:08.768 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.768 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.768 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.768 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:25:08.768 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.768 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:25:08.768 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:08.768 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:08.768 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:08.768 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:08.768 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:08.768 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:08.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:08.768 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:08.768 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:08.768 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:09.028 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:25:09.028 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
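The "line 33: [: : integer expression expected" message just above is a real but harmless shell error, not test output: nvmf/common.sh@33 evaluates '[' '' -eq 1 ']', and test's -eq operator requires integer operands, so an unset or empty variable makes the comparison itself fail and the optional branch is simply skipped. A defensive form that would keep the log clean (the flag name here is illustrative, not the actual variable in common.sh):

    # Hypothetical guard: default an empty/unset numeric flag before -eq.
    if [ "${SOME_NUMERIC_FLAG:-0}" -eq 1 ]; then
        echo "optional branch taken"
    fi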
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:09.028 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:09.028 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:25:09.029 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:25:09.029 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:25:09.029 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:09.029 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:09.029 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:09.029 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:09.029 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:09.029 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.029 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:09.029 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:09.029 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:09.029 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:09.029 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:25:09.029 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:17.174 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:17.174 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:17.174 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:17.174 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # is_hw=yes 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:17.174 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:17.174 22:24:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:17.174 22:24:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:17.174 22:24:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:17.174 22:24:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:17.174 22:24:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:17.174 22:24:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:17.174 22:24:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:17.174 22:24:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:17.174 22:24:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:17.175 22:24:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:17.175 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:17.175 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.571 ms 00:25:17.175 00:25:17.175 --- 10.0.0.2 ping statistics --- 00:25:17.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:17.175 rtt min/avg/max/mdev = 0.571/0.571/0.571/0.000 ms 00:25:17.175 22:24:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:17.175 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:17.175 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:25:17.175 00:25:17.175 --- 10.0.0.1 ping statistics --- 00:25:17.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:17.175 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:25:17.175 22:24:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:17.175 22:24:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # return 0 00:25:17.175 22:24:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:17.175 22:24:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:17.175 22:24:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:17.175 22:24:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:17.175 22:24:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:17.175 22:24:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:17.175 22:24:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:17.175 22:24:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:25:17.175 22:24:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:17.175 22:24:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:17.175 22:24:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:25:17.175 22:24:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # nvmfpid=133626 00:25:17.175 22:24:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # waitforlisten 133626 00:25:17.175 22:24:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:17.175 22:24:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 133626 ']' 00:25:17.175 22:24:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:17.175 22:24:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:17.175 22:24:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:17.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:17.175 22:24:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:17.175 22:24:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:25:17.175 [2024-10-01 22:24:11.385967] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
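The nvmf_tcp_init sequence above (nvmf/common.sh@250-291) builds a complete target/initiator pair on a single host: the first discovered CVL port, cvl_0_0, is moved into a fresh network namespace to act as the target at 10.0.0.2; the second port, cvl_0_1, stays in the root namespace as the initiator at 10.0.0.1; an iptables rule admits TCP 4420; and both directions are ping-verified. The bare commands, as they appear in the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator

The target application itself then runs inside the namespace, which is why the nvmf_tgt launch above is prefixed with ip netns exec cvl_0_0_ns_spdk.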
00:25:17.175 22:24:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:25:17.175 22:24:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0
00:25:17.175 22:24:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:25:17.175 22:24:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable
00:25:17.175 22:24:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:25:17.175 22:24:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:17.175 22:24:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:25:17.175 22:24:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode23965
00:25:17.175 [2024-10-01 22:24:12.384020] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar
00:25:17.175 22:24:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: { "nqn": "nqn.2016-06.io.spdk:cnode23965", "tgt_name": "foobar", "method": "nvmf_create_subsystem", "req_id": 1 } Got JSON-RPC error response response: { "code": -32603, "message": "Unable to find target foobar" }'
00:25:17.175 22:24:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: { "nqn": "nqn.2016-06.io.spdk:cnode23965", "tgt_name": "foobar", "method": "nvmf_create_subsystem", "req_id": 1 } Got JSON-RPC error response response: { "code": -32603, "message": "Unable to find target foobar" } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]]
00:25:17.175 22:24:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f'
00:25:17.175 22:24:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode10744
00:25:17.436 [2024-10-01 22:24:12.572676] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10744: invalid serial number 'SPDKISFASTANDAWESOME'
00:25:17.436 22:24:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: { "nqn": "nqn.2016-06.io.spdk:cnode10744", "serial_number": "SPDKISFASTANDAWESOME\u001f", "method": "nvmf_create_subsystem", "req_id": 1 } Got JSON-RPC error response response: { "code": -32602, "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" }'
00:25:17.437 22:24:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: { "nqn": "nqn.2016-06.io.spdk:cnode10744", "serial_number": "SPDKISFASTANDAWESOME\u001f", "method": "nvmf_create_subsystem", "req_id": 1 } Got JSON-RPC error response response: { "code": -32602, "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:25:17.437 22:24:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f'
00:25:17.437 22:24:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode31846
00:25:17.698 [2024-10-01 22:24:12.761196] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31846: invalid model number 'SPDK_Controller'
00:25:17.698 22:24:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: { "nqn": "nqn.2016-06.io.spdk:cnode31846", "model_number": "SPDK_Controller\u001f", "method": "nvmf_create_subsystem", "req_id": 1 } Got JSON-RPC error response response: { "code": -32602, "message": "Invalid MN SPDK_Controller\u001f" }'
00:25:17.698 22:24:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: { "nqn": "nqn.2016-06.io.spdk:cnode31846", "model_number": "SPDK_Controller\u001f", "method": "nvmf_create_subsystem", "req_id": 1 } Got JSON-RPC error response response: { "code": -32602, "message": "Invalid MN SPDK_Controller\u001f" } == *\I\n\v\a\l\i\d\ \M\N* ]]
00:25:17.698 22:24:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21
00:25:17.698 22:24:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll
00:25:17.698 22:24:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127')
00:25:17.698 22:24:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars
00:25:17.698 22:24:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string
[xtrace of the 21 gen_random_s iterations elided: each pass runs printf %x, echo -e and string+= at target/invalid.sh@24-@25 to append one random printable character, assembling 'P-c7tt>#L>~iT;K>Ic9x`']
00:25:17.961 22:24:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ P == \- ]]
00:25:17.961 22:24:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'P-c7tt>#L>~iT;K>Ic9x`'
00:25:17.961 22:24:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'P-c7tt>#L>~iT;K>Ic9x`' nqn.2016-06.io.spdk:cnode32564
00:25:17.961 [2024-10-01 22:24:13.114321] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32564: invalid serial number 'P-c7tt>#L>~iT;K>Ic9x`'
00:25:17.961 22:24:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: { "nqn": "nqn.2016-06.io.spdk:cnode32564", "serial_number": "P-c7tt>#L>~iT;K>Ic9x`", "method": "nvmf_create_subsystem", "req_id": 1 } Got JSON-RPC error response response: { "code": -32602, "message": "Invalid SN P-c7tt>#L>~iT;K>Ic9x`" }'
00:25:17.961 22:24:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: { "nqn": "nqn.2016-06.io.spdk:cnode32564", "serial_number": "P-c7tt>#L>~iT;K>Ic9x`", "method": "nvmf_create_subsystem", "req_id": 1 } Got JSON-RPC error response response: { "code": -32602, "message": "Invalid SN P-c7tt>#L>~iT;K>Ic9x`" } == *\I\n\v\a\l\i\d\ \S\N* ]]
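Both serial-number rejections follow the same assertion pattern: hand nvmf_create_subsystem a serial that breaks the NVMe rules — a control character, or 21 characters against the 20-byte SN field — and match the JSON-RPC error text. A condensed sketch, assuming the rpc.py path from this log (the NQN is an arbitrary example):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sn=$(printf 'A%.0s' {1..21})                      # 21 x 'A': one byte too long
    out=$("$rpc" nvmf_create_subsystem -s "$sn" nqn.2016-06.io.spdk:cnode1 2>&1) || true
    [[ $out == *"Invalid SN"* ]] || exit 1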
22:24:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41
22:24:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll
22:24:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127')
22:24:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars
22:24:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string
[xtrace of the 41 gen_random_s iterations elided: the same printf %x / echo -e / string+= loop, assembling '!c^wc9(O$/"T[+3igfq,Z}_zY\s~UAtzyX]]F=9Q' — the character after the '3' is the unprintable 0x7f]
00:25:18.224 22:24:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ! == \- ]]
00:25:18.224 22:24:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '!c^wc9(O$/"T[+3igfq,Z}_zY\s~UAtzyX]]F=9Q'
00:25:18.224 22:24:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '!c^wc9(O$/"T[+3igfq,Z}_zY\s~UAtzyX]]F=9Q' nqn.2016-06.io.spdk:cnode27565
00:25:18.484 [2024-10-01 22:24:13.619999] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27565: invalid model number '!c^wc9(O$/"T[+3igfq,Z}_zY\s~UAtzyX]]F=9Q'
00:25:18.484 22:24:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: { "nqn": "nqn.2016-06.io.spdk:cnode27565", "model_number": "!c^wc9(O$/\"T[+3\u007figfq,Z}_zY\\s~UAtzyX]]F=9Q", "method": "nvmf_create_subsystem", "req_id": 1 } Got JSON-RPC error response response: { "code": -32602, "message": "Invalid MN !c^wc9(O$/\"T[+3\u007figfq,Z}_zY\\s~UAtzyX]]F=9Q" }'
00:25:18.484 22:24:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: { "nqn": "nqn.2016-06.io.spdk:cnode27565", "model_number": "!c^wc9(O$/\"T[+3\u007figfq,Z}_zY\\s~UAtzyX]]F=9Q", "method": "nvmf_create_subsystem", "req_id": 1 } Got JSON-RPC error response response: { "code": -32602, "message": "Invalid MN !c^wc9(O$/\"T[+3\u007figfq,Z}_zY\\s~UAtzyX]]F=9Q" } == *\I\n\v\a\l\i\d\ \M\N* ]]
00:25:18.484 22:24:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp
00:25:18.744 [2024-10-01 22:24:13.804695] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:18.744 22:24:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a
00:25:19.006 22:24:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]]
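With the negative nvmf_create_subsystem cases done, the script provisions a real TCP transport and one valid subsystem for the listener checks that follow. The equivalent direct calls, with the flags as traced above (-s sets the serial number, -a allows any host to connect):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport --trtype tcp          # one-time transport init
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a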
00:25:19.006 22:24:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo ''
00:25:19.006 22:24:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1
00:25:19.006 22:24:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP=
00:25:19.006 22:24:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421
00:25:19.006 [2024-10-01 22:24:14.185833] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2
00:25:19.006 22:24:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: { "nqn": "nqn.2016-06.io.spdk:cnode", "listen_address": { "trtype": "tcp", "traddr": "", "trsvcid": "4421" }, "method": "nvmf_subsystem_remove_listener", "req_id": 1 } Got JSON-RPC error response response: { "code": -32602, "message": "Invalid parameters" }'
00:25:19.006 22:24:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: { "nqn": "nqn.2016-06.io.spdk:cnode", "listen_address": { "trtype": "tcp", "traddr": "", "trsvcid": "4421" }, "method": "nvmf_subsystem_remove_listener", "req_id": 1 } Got JSON-RPC error response response: { "code": -32602, "message": "Invalid parameters" } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]]
00:25:19.006 22:24:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20849 -i 0
00:25:19.268 [2024-10-01 22:24:14.374390] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20849: invalid cntlid range [0-65519]
00:25:19.268 22:24:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: { "nqn": "nqn.2016-06.io.spdk:cnode20849", "min_cntlid": 0, "method": "nvmf_create_subsystem", "req_id": 1 } Got JSON-RPC error response response: { "code": -32602, "message": "Invalid cntlid range [0-65519]" }'
00:25:19.268 22:24:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: { "nqn": "nqn.2016-06.io.spdk:cnode20849", "min_cntlid": 0, "method": "nvmf_create_subsystem", "req_id": 1 } Got JSON-RPC error response response: { "code": -32602, "message": "Invalid cntlid range [0-65519]" } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:25:19.268 22:24:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15274 -i 65520
00:25:19.529 [2024-10-01 22:24:14.563002] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15274: invalid cntlid range [65520-65519]
00:25:19.529 22:24:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: { "nqn": "nqn.2016-06.io.spdk:cnode15274", "min_cntlid": 65520, "method": "nvmf_create_subsystem", "req_id": 1 } Got JSON-RPC error response response: { "code": -32602, "message": "Invalid cntlid range [65520-65519]" }'
00:25:19.529 22:24:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: { "nqn": "nqn.2016-06.io.spdk:cnode15274", "min_cntlid": 65520, "method": "nvmf_create_subsystem", "req_id": 1 } Got JSON-RPC error response response: { "code": -32602, "message": "Invalid cntlid range [65520-65519]" } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:25:19.529 22:24:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15749 -I 0
00:25:19.529 [2024-10-01 22:24:14.751603] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15749: invalid cntlid range [1-0]
00:25:19.789 22:24:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: { "nqn": "nqn.2016-06.io.spdk:cnode15749", "max_cntlid": 0, "method": "nvmf_create_subsystem", "req_id": 1 } Got JSON-RPC error response response: { "code": -32602, "message": "Invalid cntlid range [1-0]" }'
00:25:19.789 22:24:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: { "nqn": "nqn.2016-06.io.spdk:cnode15749", "max_cntlid": 0, "method": "nvmf_create_subsystem", "req_id": 1 } Got JSON-RPC error response response: { "code": -32602, "message": "Invalid cntlid range [1-0]" } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:25:19.789 22:24:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28819 -I 65520
00:25:19.789 [2024-10-01 22:24:14.936185] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28819: invalid cntlid range [1-65520]
00:25:19.789 22:24:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: { "nqn": "nqn.2016-06.io.spdk:cnode28819", "max_cntlid": 65520, "method": "nvmf_create_subsystem", "req_id": 1 } Got JSON-RPC error response response: { "code": -32602, "message": "Invalid cntlid range [1-65520]" }'
00:25:19.789 22:24:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: { "nqn": "nqn.2016-06.io.spdk:cnode28819", "max_cntlid": 65520, "method": "nvmf_create_subsystem", "req_id": 1 } Got JSON-RPC error response response: { "code": -32602, "message": "Invalid cntlid range [1-65520]" } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:25:19.789 22:24:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3207 -i 6 -I 5
00:25:20.049 [2024-10-01 22:24:15.116755] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3207: invalid cntlid range [6-5]
00:25:20.049 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: { "nqn": "nqn.2016-06.io.spdk:cnode3207", "min_cntlid": 6, "max_cntlid": 5, "method": "nvmf_create_subsystem", "req_id": 1 } Got JSON-RPC error response response: { "code": -32602, "message": "Invalid cntlid range [6-5]" }'
00:25:20.049 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: { "nqn": "nqn.2016-06.io.spdk:cnode3207", "min_cntlid": 6, "max_cntlid": 5, "method": "nvmf_create_subsystem", "req_id": 1 } Got JSON-RPC error response response: { "code": -32602, "message": "Invalid cntlid range [6-5]" } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
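The five cntlid cases bracket the valid controller-ID window: as the errors imply, SPDK accepts IDs from 1 through 65519 and requires min_cntlid <= max_cntlid, so 0, 65520 and the inverted [6-5] range must all be rejected. The same checks, condensed (rpc variable as in the sketches above; the NQN is an arbitrary example):

    for args in "-i 0" "-i 65520" "-I 0" "-I 65520" "-i 6 -I 5"; do
        # $args is deliberately unquoted so the flag and value split into words
        out=$("$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode99 $args 2>&1) || true
        [[ $out == *"Invalid cntlid range"* ]] || exit 1
    done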
00:25:20.049 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar
00:25:20.049 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: { "name": "foobar", "method": "nvmf_delete_target", "req_id": 1 } Got JSON-RPC error response response: { "code": -32602, "message": "The specified target doesn'\''t exist, cannot delete it." }'
00:25:20.049 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: { "name": "foobar", "method": "nvmf_delete_target", "req_id": 1 } Got JSON-RPC error response response: { "code": -32602, "message": "The specified target doesn't exist, cannot delete it." } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]]
00:25:20.049 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT
00:25:20.049 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini
00:25:20.049 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # nvmfcleanup
00:25:20.049 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync
00:25:20.049 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:20.049 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e
00:25:20.049 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:20.049 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:20.049 rmmod nvme_tcp
00:25:20.049 rmmod nvme_fabrics
00:25:20.049 rmmod nvme_keyring
00:25:20.309 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:20.309 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e
00:25:20.309 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0
00:25:20.309 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@515 -- # '[' -n 133626 ']'
00:25:20.309 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # killprocess 133626
00:25:20.309 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 133626 ']'
00:25:20.309 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 133626
00:25:20.309 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname
00:25:20.309 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:25:20.309 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 133626
00:25:20.309 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:25:20.309 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:25:20.309 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 133626'
00:25:20.309 killing process with pid 133626
00:25:20.309 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 133626
00:25:20.309 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 133626
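Stripped of the helper indirection, the teardown that nvmftestfini drives comes down to the steps below (a sketch only — killprocess and nvmfcleanup wrap more safety checks than shown here):

    trap - SIGINT SIGTERM EXIT          # drop the failure trap once the test has passed
    sync
    modprobe -v -r nvme-tcp             # also pulls out nvme_fabrics and nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"  # killprocess 133626 in the trace above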
00:25:20.570 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:25:20.570 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:25:20.570 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:25:20.570 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr
00:25:20.570 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-save
00:25:20.570 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:25:20.570 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-restore
00:25:20.570 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:25:20.570 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns
00:25:20.570 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:20.570 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:20.570 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:22.482 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:25:22.482
00:25:22.482 real 0m13.849s
00:25:22.482 user 0m20.550s
00:25:22.482 sys 0m6.516s
00:25:22.482 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable
00:25:22.482 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:25:22.482 ************************************
00:25:22.482 END TEST nvmf_invalid
00:25:22.482 ************************************
00:25:22.482 22:24:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:25:22.482 22:24:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:25:22.482 22:24:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:25:22.482 22:24:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:25:22.745 ************************************
00:25:22.745 START TEST nvmf_connect_stress
00:25:22.745 ************************************
00:25:22.745 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:25:22.745 * Looking for test storage...
00:25:22.745 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:22.745 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:22.745 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:25:22.745 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:22.745 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:22.745 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:22.745 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:22.745 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:22.745 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:25:22.745 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:25:22.745 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:25:22.745 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:25:22.745 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:25:22.745 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:25:22.745 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:25:22.745 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:22.745 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:25:22.745 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:25:22.745 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:22.745 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:22.745 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:25:22.745 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:25:22.745 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:22.745 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:25:22.745 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:25:22.745 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:25:22.745 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:25:22.745 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:22.745 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:25:22.745 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:25:22.745 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:22.745 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:22.745 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:25:22.745 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:22.745 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:22.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.745 --rc genhtml_branch_coverage=1 00:25:22.745 --rc genhtml_function_coverage=1 00:25:22.745 --rc genhtml_legend=1 00:25:22.745 --rc geninfo_all_blocks=1 00:25:22.745 --rc geninfo_unexecuted_blocks=1 00:25:22.745 00:25:22.745 ' 00:25:22.745 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:22.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.745 --rc genhtml_branch_coverage=1 00:25:22.745 --rc genhtml_function_coverage=1 00:25:22.745 --rc genhtml_legend=1 00:25:22.745 --rc geninfo_all_blocks=1 00:25:22.745 --rc geninfo_unexecuted_blocks=1 00:25:22.745 00:25:22.745 ' 00:25:22.745 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:22.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.745 --rc genhtml_branch_coverage=1 00:25:22.746 --rc genhtml_function_coverage=1 00:25:22.746 --rc genhtml_legend=1 00:25:22.746 --rc geninfo_all_blocks=1 00:25:22.746 --rc geninfo_unexecuted_blocks=1 00:25:22.746 00:25:22.746 ' 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:22.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.746 --rc genhtml_branch_coverage=1 00:25:22.746 --rc genhtml_function_coverage=1 00:25:22.746 --rc genhtml_legend=1 00:25:22.746 --rc geninfo_all_blocks=1 00:25:22.746 --rc geninfo_unexecuted_blocks=1 00:25:22.746 00:25:22.746 ' 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:25:22.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:25:22.746 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:25:30.894 22:24:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:30.894 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:30.894 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:30.894 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:30.894 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:30.894 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:30.895 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:30.895 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:30.895 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:30.895 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:30.895 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:30.895 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:30.895 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:25:30.895 00:25:30.895 --- 10.0.0.2 ping statistics --- 00:25:30.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.895 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:25:30.895 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:30.895 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:30.895 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:25:30.895 00:25:30.895 --- 10.0.0.1 ping statistics --- 00:25:30.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.895 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:25:30.895 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:30.895 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # return 0 00:25:30.895 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:30.895 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:30.895 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:30.895 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:30.895 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:30.895 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:30.895 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:30.895 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:25:30.895 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:30.895 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:30.895 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:30.895 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # nvmfpid=138831 00:25:30.895 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # waitforlisten 138831 00:25:30.895 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:30.895 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 138831 ']' 00:25:30.895 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:30.895 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:30.895 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:25:30.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:30.895 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:30.895 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:30.895 [2024-10-01 22:24:25.434150] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:25:30.895 [2024-10-01 22:24:25.434217] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:30.895 [2024-10-01 22:24:25.523882] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:30.895 [2024-10-01 22:24:25.617551] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:30.895 [2024-10-01 22:24:25.617610] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:30.895 [2024-10-01 22:24:25.617620] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:30.895 [2024-10-01 22:24:25.617635] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:30.895 [2024-10-01 22:24:25.617642] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:30.895 [2024-10-01 22:24:25.617808] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:25:30.895 [2024-10-01 22:24:25.618020] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:30.895 [2024-10-01 22:24:25.618020] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:25:31.157 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:31.157 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:25:31.157 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:31.157 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:31.157 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:31.157 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:31.157 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:31.157 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.157 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:31.157 [2024-10-01 22:24:26.294463] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:31.157 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.157 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:25:31.157 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 
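At this point the target is up: nvmf_tgt runs inside the cvl_0_0_ns_spdk namespace (pid 138831, reactors on cores 1-3) and the script configures it over /var/tmp/spdk.sock. The rpc_cmd calls logged here and just below reduce to four RPCs, shown as direct scripts/rpc.py invocations for clarity (a sketch of the equivalent commands; rpc_cmd is a thin wrapper around the same RPC methods):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192     # -u: I/O unit size
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10    # any host allowed, up to 10 namespaces
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512             # 1000 MB, 512 B blocks

Because /var/tmp/spdk.sock is a Unix-domain socket reached by filesystem path, the RPCs work from the default namespace even though the target's TCP listener lives inside cvl_0_0_ns_spdk.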
00:25:31.157 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:31.157 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.157 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:31.157 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.157 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:31.157 [2024-10-01 22:24:26.318900] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:31.157 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.157 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:25:31.157 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.157 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:31.157 NULL1 00:25:31.157 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.157 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=138907 00:25:31.157 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:25:31.157 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:25:31.157 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:25:31.157 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:25:31.157 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:31.157 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:31.157 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:31.157 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:31.157 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:31.157 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:31.157 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:31.157 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:31.158 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:31.158 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:31.158 22:24:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:31.158 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:31.158 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:31.158 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:31.158 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:31.158 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:31.158 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:31.158 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:31.158 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:31.158 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:31.158 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:31.158 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:31.158 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:31.158 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:31.158 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:31.158 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:31.158 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:31.158 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:31.419 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:31.419 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:31.420 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:31.420 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:31.420 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:31.420 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:31.420 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:31.420 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:31.420 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:31.420 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:31.420 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:31.420 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:31.420 22:24:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 138907 00:25:31.420 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:31.420 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.420 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:31.681 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.681 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 138907 00:25:31.681 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:31.681 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.681 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:31.942 22:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.942 22:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 138907 00:25:31.942 22:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:31.942 22:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.942 22:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:32.201 22:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.201 22:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 138907 00:25:32.201 22:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:32.201 22:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.201 22:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:32.771 22:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.771 22:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 138907 00:25:32.771 22:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:32.771 22:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.771 22:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:33.032 22:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.032 22:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 138907 00:25:33.032 22:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:33.032 22:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.032 22:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:33.292 22:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.292 22:24:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 138907 00:25:33.292 22:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:33.292 22:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.292 22:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:33.552 22:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.552 22:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 138907 00:25:33.552 22:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:33.552 22:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.552 22:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:33.813 22:24:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.813 22:24:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 138907 00:25:33.813 22:24:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:33.813 22:24:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.813 22:24:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:34.385 22:24:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.385 22:24:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 138907 00:25:34.385 22:24:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:34.385 22:24:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.385 22:24:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:34.646 22:24:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.646 22:24:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 138907 00:25:34.646 22:24:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:34.646 22:24:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.646 22:24:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:34.907 22:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.907 22:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 138907 00:25:34.907 22:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:34.907 22:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.907 22:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:35.168 22:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.168 22:24:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 138907 00:25:35.168 22:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:35.168 22:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.168 22:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:35.428 22:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.428 22:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 138907 00:25:35.428 22:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:35.428 22:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.428 22:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:35.999 22:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.999 22:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 138907 00:25:35.999 22:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:35.999 22:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.999 22:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:36.259 22:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.259 22:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 138907 00:25:36.259 22:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:36.259 22:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.259 22:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:36.520 22:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.520 22:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 138907 00:25:36.520 22:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:36.520 22:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.520 22:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:36.780 22:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.780 22:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 138907 00:25:36.780 22:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:36.780 22:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.780 22:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:37.040 22:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.040 22:24:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 138907 00:25:37.040 22:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:37.040 22:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.040 22:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:37.610 22:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.610 22:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 138907 00:25:37.610 22:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:37.610 22:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.610 22:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:37.869 22:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.869 22:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 138907 00:25:37.869 22:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:37.869 22:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.869 22:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:38.131 22:24:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.131 22:24:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 138907 00:25:38.131 22:24:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:38.131 22:24:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.131 22:24:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:38.391 22:24:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.391 22:24:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 138907 00:25:38.391 22:24:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:38.391 22:24:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.391 22:24:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:38.962 22:24:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.962 22:24:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 138907 00:25:38.962 22:24:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:38.962 22:24:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.962 22:24:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:39.223 22:24:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.223 22:24:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 138907 00:25:39.223 22:24:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:39.223 22:24:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.223 22:24:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:39.483 22:24:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.483 22:24:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 138907 00:25:39.483 22:24:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:39.483 22:24:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.483 22:24:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:39.743 22:24:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.743 22:24:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 138907 00:25:39.743 22:24:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:39.743 22:24:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.743 22:24:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:40.005 22:24:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.005 22:24:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 138907 00:25:40.005 22:24:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:40.005 22:24:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.005 22:24:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:40.575 22:24:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.576 22:24:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 138907 00:25:40.576 22:24:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:40.576 22:24:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.576 22:24:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:40.836 22:24:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.836 22:24:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 138907 00:25:40.836 22:24:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:40.836 22:24:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.836 22:24:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:41.097 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.097 22:24:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 138907 00:25:41.097 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:41.097 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.097 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:41.406 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:41.406 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.406 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 138907 00:25:41.406 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (138907) - No such process 00:25:41.406 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 138907 00:25:41.406 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:25:41.406 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:25:41.406 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:25:41.406 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:41.406 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:25:41.406 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:41.406 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:25:41.406 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:41.406 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:41.406 rmmod nvme_tcp 00:25:41.406 rmmod nvme_fabrics 00:25:41.406 rmmod nvme_keyring 00:25:41.406 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:41.406 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:25:41.406 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:25:41.406 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@515 -- # '[' -n 138831 ']' 00:25:41.406 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # killprocess 138831 00:25:41.406 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 138831 ']' 00:25:41.406 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 138831 00:25:41.406 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:25:41.406 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:41.406 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 138831 00:25:41.690 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 
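The block above is the heart of connect_stress.sh: line 34 of the script runs `kill -0 138907`, which delivers no signal and only tests whether the stress process is still alive, and while it is, line 35 issues another rpc_cmd against the target. Once the PID disappears, kill reports "No such process", the script falls through to `wait 138907` to reap it, and teardown starts (note the `rm -f .../rpc.txt` just after, which suggests rpc_cmd is fed queued requests from that file). A minimal sketch of the polling idiom, with the RPC payload left abstract since the script's exact feed is not visible here:

# Hedged sketch of the kill -0 / rpc_cmd loop traced above.
pid=138907                            # stress process started earlier in the test
while kill -0 "$pid" 2>/dev/null; do  # -0 probes liveness without delivering a signal
    rpc_cmd                           # issue the next queued RPC while the run is live
done
wait "$pid" 2>/dev/null               # reap the exit status once the process is gone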
00:25:41.690 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:41.690 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 138831' 00:25:41.690 killing process with pid 138831 00:25:41.690 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 138831 00:25:41.690 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 138831 00:25:41.690 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:41.690 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:41.690 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:41.690 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:25:41.690 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-save 00:25:41.690 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:41.690 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-restore 00:25:41.690 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:41.690 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:41.690 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:41.690 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:41.690 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.241 22:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:44.241 00:25:44.241 real 0m21.184s 00:25:44.241 user 0m43.362s 00:25:44.241 sys 0m7.955s 00:25:44.241 22:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:44.241 22:24:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:44.241 ************************************ 00:25:44.241 END TEST nvmf_connect_stress 00:25:44.241 ************************************ 00:25:44.241 22:24:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:25:44.241 22:24:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:44.241 22:24:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:44.241 22:24:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:44.241 ************************************ 00:25:44.241 START TEST nvmf_fused_ordering 00:25:44.241 ************************************ 00:25:44.241 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:25:44.241 * Looking for test storage... 
00:25:44.241 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:44.241 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:44.241 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lcov --version 00:25:44.241 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:44.241 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:44.241 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:44.241 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:44.241 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:44.241 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:25:44.241 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:25:44.241 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:25:44.241 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:25:44.241 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:25:44.241 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:25:44.241 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:25:44.241 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:44.241 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:25:44.241 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:25:44.241 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:44.241 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:44.241 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:25:44.241 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:25:44.241 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:44.241 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:25:44.241 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:25:44.241 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:44.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.242 --rc genhtml_branch_coverage=1 00:25:44.242 --rc genhtml_function_coverage=1 00:25:44.242 --rc genhtml_legend=1 00:25:44.242 --rc geninfo_all_blocks=1 00:25:44.242 --rc geninfo_unexecuted_blocks=1 00:25:44.242 00:25:44.242 ' 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:44.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.242 --rc genhtml_branch_coverage=1 00:25:44.242 --rc genhtml_function_coverage=1 00:25:44.242 --rc genhtml_legend=1 00:25:44.242 --rc geninfo_all_blocks=1 00:25:44.242 --rc geninfo_unexecuted_blocks=1 00:25:44.242 00:25:44.242 ' 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:44.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.242 --rc genhtml_branch_coverage=1 00:25:44.242 --rc genhtml_function_coverage=1 00:25:44.242 --rc genhtml_legend=1 00:25:44.242 --rc geninfo_all_blocks=1 00:25:44.242 --rc geninfo_unexecuted_blocks=1 00:25:44.242 00:25:44.242 ' 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:44.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.242 --rc genhtml_branch_coverage=1 00:25:44.242 --rc genhtml_function_coverage=1 00:25:44.242 --rc genhtml_legend=1 00:25:44.242 --rc geninfo_all_blocks=1 00:25:44.242 --rc geninfo_unexecuted_blocks=1 00:25:44.242 00:25:44.242 ' 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
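The xtrace above is scripts/common.sh answering one question: is the installed lcov (1.15) older than 2? Both version strings are split on ".-:" into arrays and compared component by component, with missing components treated as 0; here the first field already decides 1 < 2, so the pre-2.0 LCOV_OPTS get exported. A condensed re-implementation of the same comparison (the helper name below is mine, not the script's):

# Hedged sketch of the dotted-version comparison; version_lt is an invented name.
version_lt() {                        # returns 0 (true) when $1 < $2
    local IFS=.-: v ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1                          # equal is not "less than"
}
version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.0 lcov flags apply"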
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
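One readability note on the three giant assignments above: paths/export.sh prepends the golangci, protoc, and go directories unconditionally, and since every test script sources it again, the same triple stacks onto PATH once per invocation; by this point in the run it appears roughly seven times. That is harmless to execution, only noisy in the log. A hedged prepend-once helper, which is not something the SPDK scripts do, would avoid the growth:

# Hypothetical dedup helper; shown only to explain the repetition above.
path_prepend() {
    case ":$PATH:" in
        *":$1:"*) ;;                  # already present: leave PATH untouched
        *) PATH="$1:$PATH" ;;
    esac
}
path_prepend /opt/go/1.21.1/bin
path_prepend /opt/go/1.21.1/bin       # second call is a no-op
export PATH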
-- # '[' '' -eq 1 ']' 00:25:44.242 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:44.242 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.243 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:44.243 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:44.243 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:25:44.243 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:25:52.385 22:24:46 
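The "integer expression expected" complaint above is a genuine script wart captured by the log, not a test failure: nvmf/common.sh line 33 hands `[` an empty string where `-eq` requires an integer, `[` exits with status 2, the enclosing `if` simply takes the false branch, and the run continues. The usual repairs are to default the operand or guard it first; the variable name below is purely illustrative, since the one at line 33 is not visible in this log:

[ '' -eq 1 ]                          # reproduces the error: '' is not an integer operand
[ "${some_flag:-0}" -eq 1 ]           # fix 1: default the operand (some_flag is a stand-in)
[ -n "$some_flag" ] && [ "$some_flag" -eq 1 ]   # fix 2: string-test before the numeric test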
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:52.385 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:52.385 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:52.385 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:52.385 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:52.386 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 
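The per-device loop above resolves each matching PCI function to its kernel interface purely through sysfs: the glob /sys/bus/pci/devices/$pci/net/* names the netdev directory, and stripping the path prefix leaves cvl_0_0 and cvl_0_1 for the two E810 ports (0x8086:0x159b). A standalone sketch of the same lookup, with the vendor and device IDs hardcoded for illustration:

# Hedged sketch: list net interfaces backed by Intel E810 (0x8086:0x159b) functions.
for pci in /sys/bus/pci/devices/*; do
    [ "$(cat "$pci/vendor")" = 0x8086 ] || continue
    [ "$(cat "$pci/device")" = 0x159b ] || continue
    for dev in "$pci"/net/*; do       # each entry under net/ is a kernel netdev
        [ -e "$dev" ] && echo "Found net device under ${pci##*/}: ${dev##*/}"
    done
done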
-- # net_devs+=("${pci_net_devs[@]}") 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # is_hw=yes 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:52.386 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:52.386 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.482 ms 00:25:52.386 00:25:52.386 --- 10.0.0.2 ping statistics --- 00:25:52.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:52.386 rtt min/avg/max/mdev = 0.482/0.482/0.482/0.000 ms 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:52.386 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:52.386 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:25:52.386 00:25:52.386 --- 10.0.0.1 ping statistics --- 00:25:52.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:52.386 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # return 0 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # nvmfpid=145214 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # waitforlisten 145214 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 145214 ']' 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
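Everything the two ping blocks above verify was assembled a few lines earlier: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1, and port 4420 is opened with an iptables rule tagged by an SPDK_NVMF comment so the teardown seen after the previous test can strip it with iptables-save | grep -v SPDK_NVMF | iptables-restore. Condensed from the exact commands in this log (the comment text is abbreviated here):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target port leaves the root ns
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
ping -c 1 10.0.0.2                                         # root ns -> target (0.482 ms above)
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target ns -> initiator (0.295 ms)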
/var/tmp/spdk.sock...' 00:25:52.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:52.386 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:25:52.386 [2024-10-01 22:24:46.616558] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:25:52.386 [2024-10-01 22:24:46.616634] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:52.386 [2024-10-01 22:24:46.705755] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:52.386 [2024-10-01 22:24:46.796528] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:52.386 [2024-10-01 22:24:46.796589] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:52.386 [2024-10-01 22:24:46.796599] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:52.386 [2024-10-01 22:24:46.796607] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:52.386 [2024-10-01 22:24:46.796618] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:52.386 [2024-10-01 22:24:46.796653] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:52.386 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:52.386 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:25:52.386 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:52.386 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:52.386 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:25:52.386 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:52.386 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:52.386 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.386 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:25:52.386 [2024-10-01 22:24:47.479772] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:52.386 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.386 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:25:52.386 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.386 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:25:52.386 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:25:52.386 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:52.386 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.386 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:25:52.386 [2024-10-01 22:24:47.504037] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:52.386 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.386 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:25:52.386 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.386 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:25:52.386 NULL1 00:25:52.386 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.386 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:25:52.386 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.386 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:25:52.387 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.387 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:25:52.387 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.387 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:25:52.387 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.387 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:52.387 [2024-10-01 22:24:47.574152] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
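The rpc_cmd calls above assemble the minimal fused-ordering target: a TCP transport with 8192-byte I/O units, one subsystem capped at 10 queue pairs (-m 10), a listener on 10.0.0.2:4420, and a 1000 MiB, 512-byte-block null bdev exposed as namespace 1 (the initiator reports it as "size: 1GB" just below). rpc_cmd is the test wrapper that forwards its arguments to scripts/rpc.py over the target's UNIX socket; issued directly, the same sequence would look roughly like this:

# Hedged equivalent of the rpc_cmd sequence above, via scripts/rpc.py.
rpc=scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_null_create NULL1 1000 512                       # name, size_mb, block_size
$rpc bdev_wait_for_examine
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

A null bdev is backed by no storage at all, which is sufficient here: the fused_ordering app exercises command ordering on the queue pair, not data integrity.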
00:25:52.387 [2024-10-01 22:24:47.574196] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145408 ] 00:25:53.327 Attached to nqn.2016-06.io.spdk:cnode1 00:25:53.327 Namespace ID: 1 size: 1GB 00:25:53.327 fused_ordering(0) 00:25:53.327 fused_ordering(1) 00:25:53.327 fused_ordering(2) 00:25:53.327 fused_ordering(3) 00:25:53.327 fused_ordering(4) 00:25:53.327 fused_ordering(5) 00:25:53.327 fused_ordering(6) 00:25:53.327 fused_ordering(7) 00:25:53.327 fused_ordering(8) 00:25:53.327 fused_ordering(9) 00:25:53.327 fused_ordering(10) 00:25:53.327 fused_ordering(11) 00:25:53.327 fused_ordering(12) 00:25:53.327 fused_ordering(13) 00:25:53.327 fused_ordering(14) 00:25:53.327 fused_ordering(15) 00:25:53.327 fused_ordering(16) 00:25:53.327 fused_ordering(17) 00:25:53.327 fused_ordering(18) 00:25:53.327 fused_ordering(19) 00:25:53.327 fused_ordering(20) 00:25:53.327 fused_ordering(21) 00:25:53.327 fused_ordering(22) 00:25:53.327 fused_ordering(23) 00:25:53.327 fused_ordering(24) 00:25:53.327 fused_ordering(25) 00:25:53.327 fused_ordering(26) 00:25:53.327 fused_ordering(27) 00:25:53.327 fused_ordering(28) 00:25:53.327 fused_ordering(29) 00:25:53.327 fused_ordering(30) 00:25:53.327 fused_ordering(31) 00:25:53.327 fused_ordering(32) 00:25:53.327 fused_ordering(33) 00:25:53.327 fused_ordering(34) 00:25:53.327 fused_ordering(35) 00:25:53.327 fused_ordering(36) 00:25:53.327 fused_ordering(37) 00:25:53.327 fused_ordering(38) 00:25:53.327 fused_ordering(39) 00:25:53.327 fused_ordering(40) 00:25:53.327 fused_ordering(41) 00:25:53.327 fused_ordering(42) 00:25:53.327 fused_ordering(43) 00:25:53.327 fused_ordering(44) 00:25:53.327 fused_ordering(45) 00:25:53.327 fused_ordering(46) 00:25:53.327 fused_ordering(47) 00:25:53.327 fused_ordering(48) 00:25:53.327 fused_ordering(49) 00:25:53.327 fused_ordering(50) 00:25:53.327 fused_ordering(51) 00:25:53.327 fused_ordering(52) 00:25:53.327 fused_ordering(53) 00:25:53.327 fused_ordering(54) 00:25:53.327 fused_ordering(55) 00:25:53.327 fused_ordering(56) 00:25:53.327 fused_ordering(57) 00:25:53.327 fused_ordering(58) 00:25:53.327 fused_ordering(59) 00:25:53.327 fused_ordering(60) 00:25:53.327 fused_ordering(61) 00:25:53.327 fused_ordering(62) 00:25:53.327 fused_ordering(63) 00:25:53.327 fused_ordering(64) 00:25:53.327 fused_ordering(65) 00:25:53.327 fused_ordering(66) 00:25:53.327 fused_ordering(67) 00:25:53.327 fused_ordering(68) 00:25:53.327 fused_ordering(69) 00:25:53.327 fused_ordering(70) 00:25:53.327 fused_ordering(71) 00:25:53.327 fused_ordering(72) 00:25:53.327 fused_ordering(73) 00:25:53.327 fused_ordering(74) 00:25:53.327 fused_ordering(75) 00:25:53.327 fused_ordering(76) 00:25:53.327 fused_ordering(77) 00:25:53.327 fused_ordering(78) 00:25:53.327 fused_ordering(79) 00:25:53.327 fused_ordering(80) 00:25:53.327 fused_ordering(81) 00:25:53.327 fused_ordering(82) 00:25:53.327 fused_ordering(83) 00:25:53.327 fused_ordering(84) 00:25:53.327 fused_ordering(85) 00:25:53.327 fused_ordering(86) 00:25:53.327 fused_ordering(87) 00:25:53.327 fused_ordering(88) 00:25:53.327 fused_ordering(89) 00:25:53.327 fused_ordering(90) 00:25:53.327 fused_ordering(91) 00:25:53.327 fused_ordering(92) 00:25:53.327 fused_ordering(93) 00:25:53.327 fused_ordering(94) 00:25:53.327 fused_ordering(95) 00:25:53.327 fused_ordering(96) 00:25:53.327 fused_ordering(97) 00:25:53.327 fused_ordering(98) 
00:25:53.327 fused_ordering(99) 00:25:53.327 fused_ordering(100) 00:25:53.327 fused_ordering(101) 00:25:53.327 fused_ordering(102) 00:25:53.327 fused_ordering(103) 00:25:53.327 fused_ordering(104) 00:25:53.327 fused_ordering(105) 00:25:53.327 fused_ordering(106) 00:25:53.327 fused_ordering(107) 00:25:53.327 fused_ordering(108) 00:25:53.327 fused_ordering(109) 00:25:53.327 fused_ordering(110) 00:25:53.327 fused_ordering(111) 00:25:53.327 fused_ordering(112) 00:25:53.327 fused_ordering(113) 00:25:53.327 fused_ordering(114) 00:25:53.327 fused_ordering(115) 00:25:53.327 fused_ordering(116) 00:25:53.327 fused_ordering(117) 00:25:53.327 fused_ordering(118) 00:25:53.327 fused_ordering(119) 00:25:53.327 fused_ordering(120) 00:25:53.327 fused_ordering(121) 00:25:53.327 fused_ordering(122) 00:25:53.327 fused_ordering(123) 00:25:53.327 fused_ordering(124) 00:25:53.327 fused_ordering(125) 00:25:53.327 fused_ordering(126) 00:25:53.327 fused_ordering(127) 00:25:53.327 fused_ordering(128) 00:25:53.327 fused_ordering(129) 00:25:53.327 fused_ordering(130) 00:25:53.328 fused_ordering(131) 00:25:53.328 fused_ordering(132) 00:25:53.328 fused_ordering(133) 00:25:53.328 fused_ordering(134) 00:25:53.328 fused_ordering(135) 00:25:53.328 fused_ordering(136) 00:25:53.328 fused_ordering(137) 00:25:53.328 fused_ordering(138) 00:25:53.328 fused_ordering(139) 00:25:53.328 fused_ordering(140) 00:25:53.328 fused_ordering(141) 00:25:53.328 fused_ordering(142) 00:25:53.328 fused_ordering(143) 00:25:53.328 fused_ordering(144) 00:25:53.328 fused_ordering(145) 00:25:53.328 fused_ordering(146) 00:25:53.328 fused_ordering(147) 00:25:53.328 fused_ordering(148) 00:25:53.328 fused_ordering(149) 00:25:53.328 fused_ordering(150) 00:25:53.328 fused_ordering(151) 00:25:53.328 fused_ordering(152) 00:25:53.328 fused_ordering(153) 00:25:53.328 fused_ordering(154) 00:25:53.328 fused_ordering(155) 00:25:53.328 fused_ordering(156) 00:25:53.328 fused_ordering(157) 00:25:53.328 fused_ordering(158) 00:25:53.328 fused_ordering(159) 00:25:53.328 fused_ordering(160) 00:25:53.328 fused_ordering(161) 00:25:53.328 fused_ordering(162) 00:25:53.328 fused_ordering(163) 00:25:53.328 fused_ordering(164) 00:25:53.328 fused_ordering(165) 00:25:53.328 fused_ordering(166) 00:25:53.328 fused_ordering(167) 00:25:53.328 fused_ordering(168) 00:25:53.328 fused_ordering(169) 00:25:53.328 fused_ordering(170) 00:25:53.328 fused_ordering(171) 00:25:53.328 fused_ordering(172) 00:25:53.328 fused_ordering(173) 00:25:53.328 fused_ordering(174) 00:25:53.328 fused_ordering(175) 00:25:53.328 fused_ordering(176) 00:25:53.328 fused_ordering(177) 00:25:53.328 fused_ordering(178) 00:25:53.328 fused_ordering(179) 00:25:53.328 fused_ordering(180) 00:25:53.328 fused_ordering(181) 00:25:53.328 fused_ordering(182) 00:25:53.328 fused_ordering(183) 00:25:53.328 fused_ordering(184) 00:25:53.328 fused_ordering(185) 00:25:53.328 fused_ordering(186) 00:25:53.328 fused_ordering(187) 00:25:53.328 fused_ordering(188) 00:25:53.328 fused_ordering(189) 00:25:53.328 fused_ordering(190) 00:25:53.328 fused_ordering(191) 00:25:53.328 fused_ordering(192) 00:25:53.328 fused_ordering(193) 00:25:53.328 fused_ordering(194) 00:25:53.328 fused_ordering(195) 00:25:53.328 fused_ordering(196) 00:25:53.328 fused_ordering(197) 00:25:53.328 fused_ordering(198) 00:25:53.328 fused_ordering(199) 00:25:53.328 fused_ordering(200) 00:25:53.328 fused_ordering(201) 00:25:53.328 fused_ordering(202) 00:25:53.328 fused_ordering(203) 00:25:53.328 fused_ordering(204) 00:25:53.328 fused_ordering(205) 00:25:53.589 
[log condensed: fused_ordering(206) through fused_ordering(1023) — 818 consecutive counter iterations logged between 00:25:53.589 and 00:25:55.306, with no failures reported in this span]
00:25:55.306 22:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:25:55.306 22:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:25:55.306 22:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # nvmfcleanup
00:25:55.306 22:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync
00:25:55.306 22:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:55.306 22:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e
00:25:55.306 22:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:55.306 22:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:55.306 rmmod nvme_tcp
00:25:55.306 rmmod nvme_fabrics
00:25:55.306 rmmod nvme_keyring
00:25:55.306 22:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:55.306 22:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e
00:25:55.306 22:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0
00:25:55.306 22:24:50
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@515 -- # '[' -n 145214 ']' 00:25:55.306 22:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # killprocess 145214 00:25:55.306 22:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 145214 ']' 00:25:55.306 22:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 145214 00:25:55.306 22:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:25:55.306 22:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:55.306 22:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 145214 00:25:55.306 22:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:55.306 22:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:55.306 22:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 145214' 00:25:55.306 killing process with pid 145214 00:25:55.306 22:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 145214 00:25:55.306 22:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 145214 00:25:55.567 22:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:55.567 22:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:55.567 22:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:55.567 22:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:25:55.567 22:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-save 00:25:55.567 22:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:55.567 22:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-restore 00:25:55.567 22:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:55.567 22:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:55.567 22:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:55.567 22:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:55.567 22:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:57.526 22:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:57.526 00:25:57.526 real 0m13.713s 00:25:57.526 user 0m8.040s 00:25:57.526 sys 0m6.886s 00:25:57.526 22:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:57.526 22:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:25:57.526 ************************************ 00:25:57.526 END TEST nvmf_fused_ordering 00:25:57.526 
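[editor's note: the teardown above follows autotest_common.sh's killprocess pattern — verify the PID is alive with kill -0, refuse to kill a process whose comm is "sudo", then kill and reap it. A minimal standalone sketch of the same idea in bash; the function name is hypothetical:
    killproc() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0                       # process already gone, nothing to do
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1  # never kill sudo itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                              # reaping only works for our own children
    }
end note]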
************************************ 00:25:57.526 22:24:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:25:57.526 22:24:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:57.526 22:24:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:57.526 22:24:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:57.789 ************************************ 00:25:57.789 START TEST nvmf_ns_masking 00:25:57.789 ************************************ 00:25:57.789 22:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:25:57.789 * Looking for test storage... 00:25:57.789 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:57.789 22:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:57.789 22:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:57.789 22:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lcov --version 00:25:57.789 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:57.789 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:57.789 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:57.789 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:57.789 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:25:57.789 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:25:57.789 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:25:57.789 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:25:57.789 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:25:57.789 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:25:57.789 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:25:57.789 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:57.789 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:25:57.789 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:25:57.789 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:57.789 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:57.789 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:25:57.789 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:25:57.789 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:57.789 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:25:57.789 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:25:57.789 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:25:57.789 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:25:57.789 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:57.789 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:25:57.789 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:25:57.789 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:57.789 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:57.789 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:25:57.789 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:57.789 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:57.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.789 --rc genhtml_branch_coverage=1 00:25:57.789 --rc genhtml_function_coverage=1 00:25:57.789 --rc genhtml_legend=1 00:25:57.789 --rc geninfo_all_blocks=1 00:25:57.789 --rc geninfo_unexecuted_blocks=1 00:25:57.789 00:25:57.789 ' 00:25:57.789 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:57.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.789 --rc genhtml_branch_coverage=1 00:25:57.789 --rc genhtml_function_coverage=1 00:25:57.789 --rc genhtml_legend=1 00:25:57.789 --rc geninfo_all_blocks=1 00:25:57.789 --rc geninfo_unexecuted_blocks=1 00:25:57.789 00:25:57.789 ' 00:25:57.789 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:57.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.789 --rc genhtml_branch_coverage=1 00:25:57.789 --rc genhtml_function_coverage=1 00:25:57.789 --rc genhtml_legend=1 00:25:57.789 --rc geninfo_all_blocks=1 00:25:57.789 --rc geninfo_unexecuted_blocks=1 00:25:57.789 00:25:57.789 ' 00:25:57.789 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:57.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.789 --rc genhtml_branch_coverage=1 00:25:57.789 --rc genhtml_function_coverage=1 00:25:57.789 --rc genhtml_legend=1 00:25:57.789 --rc geninfo_all_blocks=1 00:25:57.789 --rc geninfo_unexecuted_blocks=1 00:25:57.789 00:25:57.789 ' 00:25:57.789 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:57.789 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:25:57.789 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:57.789 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:57.789 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:57.789 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:57.789 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:57.789 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:57.789 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:57.789 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:57.789 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:57.789 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:58.052 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:58.052 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:58.052 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:58.052 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:58.052 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:58.052 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:58.052 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:58.052 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:25:58.052 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:58.052 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:58.052 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:58.052 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.052 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.052 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.052 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:25:58.052 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.052 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:25:58.052 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:58.052 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:58.052 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:58.052 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:58.052 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:58.052 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:58.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:58.052 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:58.052 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:58.052 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:58.052 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:58.052 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:25:58.052 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:25:58.052 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:25:58.052 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=03779758-46f2-4856-8841-47b48d23aa91 00:25:58.052 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:25:58.052 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=fb80f5ce-ef1e-4454-a017-bcbe885d70b9 00:25:58.052 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:25:58.052 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:25:58.052 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:25:58.053 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:25:58.053 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=86071b8e-cbde-49ff-a14d-5bed1ce334ba 00:25:58.053 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:25:58.053 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:58.053 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:58.053 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:58.053 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:58.053 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:58.053 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:58.053 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:58.053 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:58.053 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:58.053 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:58.053 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:25:58.053 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:06.204 22:24:59 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:06.204 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:06.204 22:24:59 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:06.204 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:06.204 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 
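[editor's note: the discovery loop above (repeated just below for the second port) maps each detected E810 PCI function to its kernel net device by globbing sysfs, and the "[[ up == up ]]" checks suggest only interfaces reporting operstate "up" are kept. A minimal sketch of that lookup in bash, using the PCI address from this log:
    pci=0000:4b:00.0
    for path in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$path" ] || continue                                  # no netdev bound to this function
        dev=${path##*/}                                             # strip the sysfs prefix, e.g. cvl_0_0
        [ "$(cat "$path/operstate")" = up ] && echo "Found net devices under $pci: $dev"
    done
end note]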
00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:06.204 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # is_hw=yes 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:06.204 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:06.204 22:25:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:06.204 22:25:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:06.204 22:25:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:06.204 22:25:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:06.204 22:25:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:06.204 22:25:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:06.204 22:25:00 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:06.204 22:25:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:06.204 22:25:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:06.204 22:25:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:06.204 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:06.204 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.695 ms 00:26:06.204 00:26:06.204 --- 10.0.0.2 ping statistics --- 00:26:06.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:06.204 rtt min/avg/max/mdev = 0.695/0.695/0.695/0.000 ms 00:26:06.204 22:25:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:06.204 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:06.205 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:26:06.205 00:26:06.205 --- 10.0.0.1 ping statistics --- 00:26:06.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:06.205 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:26:06.205 22:25:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:06.205 22:25:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # return 0 00:26:06.205 22:25:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:06.205 22:25:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:06.205 22:25:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:06.205 22:25:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:06.205 22:25:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:06.205 22:25:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:06.205 22:25:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:06.205 22:25:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:26:06.205 22:25:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:06.205 22:25:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:06.205 22:25:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:26:06.205 22:25:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # nvmfpid=150235 00:26:06.205 22:25:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # waitforlisten 150235 00:26:06.205 22:25:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:26:06.205 22:25:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 150235 ']' 00:26:06.205 22:25:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:06.205 22:25:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:06.205 22:25:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:06.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:06.205 22:25:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:06.205 22:25:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:26:06.205 [2024-10-01 22:25:00.412250] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:26:06.205 [2024-10-01 22:25:00.412316] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:06.205 [2024-10-01 22:25:00.482266] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.205 [2024-10-01 22:25:00.546157] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:06.205 [2024-10-01 22:25:00.546196] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:06.205 [2024-10-01 22:25:00.546204] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:06.205 [2024-10-01 22:25:00.546211] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:06.205 [2024-10-01 22:25:00.546217] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
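[editor's note: everything above builds the standard single-host test topology: the second E810 port (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator, while the first port (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace where nvmf_tgt runs, so NVMe/TCP traffic traverses the physical link between the two ports. Condensed from the commands in this log:
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let the NVMe/TCP port through
    ping -c 1 10.0.0.2                                              # initiator -> target sanity check
end note]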
00:26:06.205 [2024-10-01 22:25:00.546238] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:06.205 22:25:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:06.205 22:25:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:26:06.205 22:25:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:06.205 22:25:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:06.205 22:25:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:26:06.205 22:25:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:06.205 22:25:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:06.205 [2024-10-01 22:25:01.405087] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:06.205 22:25:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:26:06.205 22:25:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:26:06.205 22:25:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:26:06.465 Malloc1 00:26:06.465 22:25:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:26:06.725 Malloc2 00:26:06.725 22:25:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:06.985 22:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:26:06.985 22:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:07.246 [2024-10-01 22:25:02.325217] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:07.246 22:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:26:07.246 22:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 86071b8e-cbde-49ff-a14d-5bed1ce334ba -a 10.0.0.2 -s 4420 -i 4 00:26:07.506 22:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:26:07.506 22:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:26:07.506 22:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:07.506 22:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:07.506 
22:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:26:09.418 22:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:09.418 22:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:09.418 22:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:26:09.418 22:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:09.418 22:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:09.418 22:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:26:09.418 22:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:26:09.418 22:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:26:09.418 22:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:26:09.418 22:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:26:09.418 22:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:26:09.418 22:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:26:09.418 22:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:26:09.418 [ 0]:0x1 00:26:09.418 22:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:26:09.418 22:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:26:09.679 22:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3bdc068815ac4e49ba08cdf2a3dd274e 00:26:09.679 22:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3bdc068815ac4e49ba08cdf2a3dd274e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:09.679 22:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:26:09.679 22:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:26:09.679 22:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:26:09.679 22:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:26:09.679 [ 0]:0x1 00:26:09.679 22:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:26:09.679 22:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:26:09.939 22:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3bdc068815ac4e49ba08cdf2a3dd274e 00:26:09.939 22:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3bdc068815ac4e49ba08cdf2a3dd274e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:09.939 22:25:04 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:26:09.939 22:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:26:09.939 22:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:26:09.939 [ 1]:0x2 00:26:09.939 22:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:26:09.939 22:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:26:09.939 22:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aaa4a2b68afb44e3af8551c023f5dcd3 00:26:09.939 22:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aaa4a2b68afb44e3af8551c023f5dcd3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:09.939 22:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:26:09.939 22:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:09.939 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:09.939 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:10.200 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:26:10.200 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:26:10.461 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 86071b8e-cbde-49ff-a14d-5bed1ce334ba -a 10.0.0.2 -s 4420 -i 4 00:26:10.461 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:26:10.461 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:26:10.461 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:10.461 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:26:10.461 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:26:10.461 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:26:13.006 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:13.006 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:13.006 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:26:13.006 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:13.006 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:13.006 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # 
return 0 00:26:13.006 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:26:13.006 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:26:13.006 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:26:13.006 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:26:13.006 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:26:13.006 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:26:13.006 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:26:13.006 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:26:13.006 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:13.006 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:26:13.006 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:13.006 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:26:13.006 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:26:13.006 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:26:13.006 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:26:13.006 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:26:13.006 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:26:13.006 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:13.006 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:26:13.006 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:13.006 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:13.006 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:13.006 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:26:13.006 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:26:13.006 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:26:13.006 [ 0]:0x2 00:26:13.006 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:26:13.006 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:26:13.006 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=aaa4a2b68afb44e3af8551c023f5dcd3 00:26:13.006 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aaa4a2b68afb44e3af8551c023f5dcd3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:13.006 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:26:13.006 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:26:13.006 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:26:13.006 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:26:13.006 [ 0]:0x1 00:26:13.006 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:26:13.006 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:26:13.006 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3bdc068815ac4e49ba08cdf2a3dd274e 00:26:13.006 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3bdc068815ac4e49ba08cdf2a3dd274e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:13.006 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:26:13.006 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:26:13.006 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:26:13.006 [ 1]:0x2 00:26:13.006 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:26:13.006 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:26:13.006 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aaa4a2b68afb44e3af8551c023f5dcd3 00:26:13.006 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aaa4a2b68afb44e3af8551c023f5dcd3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:13.006 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:26:13.267 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:26:13.267 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:26:13.267 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:26:13.267 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:26:13.267 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:13.267 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:26:13.267 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:13.267 22:25:08 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:26:13.267 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:26:13.267 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:26:13.267 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:26:13.267 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:26:13.267 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:26:13.267 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:13.267 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:26:13.267 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:13.267 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:13.267 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:13.267 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:26:13.267 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:26:13.267 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:26:13.267 [ 0]:0x2 00:26:13.267 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:26:13.267 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:26:13.268 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aaa4a2b68afb44e3af8551c023f5dcd3 00:26:13.268 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aaa4a2b68afb44e3af8551c023f5dcd3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:13.268 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:26:13.268 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:13.268 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:13.268 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:26:13.528 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:26:13.528 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 86071b8e-cbde-49ff-a14d-5bed1ce334ba -a 10.0.0.2 -s 4420 -i 4 00:26:13.789 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:26:13.789 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:26:13.789 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:13.789 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:26:13.789 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:26:13.789 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:26:15.703 22:25:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:15.703 22:25:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:15.703 22:25:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:26:15.703 22:25:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:26:15.703 22:25:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:15.703 22:25:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:26:15.703 22:25:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:26:15.703 22:25:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:26:15.965 22:25:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:26:15.965 22:25:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:26:15.965 22:25:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:26:15.965 22:25:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:26:15.965 22:25:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:26:15.965 [ 0]:0x1 00:26:15.965 22:25:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:26:15.965 22:25:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:26:15.965 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3bdc068815ac4e49ba08cdf2a3dd274e 00:26:15.965 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3bdc068815ac4e49ba08cdf2a3dd274e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:15.965 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:26:15.965 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:26:15.965 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:26:15.965 [ 1]:0x2 00:26:15.965 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:26:15.965 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:26:15.965 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aaa4a2b68afb44e3af8551c023f5dcd3 00:26:15.965 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aaa4a2b68afb44e3af8551c023f5dcd3 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:15.965 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:26:16.227 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:26:16.227 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:26:16.227 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:26:16.227 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:26:16.227 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:16.227 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:26:16.227 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:16.227 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:26:16.227 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:26:16.227 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:26:16.227 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:26:16.227 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:26:16.227 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:26:16.227 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:16.227 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:26:16.227 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:16.227 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:16.227 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:16.227 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:26:16.227 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:26:16.227 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:26:16.227 [ 0]:0x2 00:26:16.227 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:26:16.227 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:26:16.227 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aaa4a2b68afb44e3af8551c023f5dcd3 00:26:16.227 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aaa4a2b68afb44e3af8551c023f5dcd3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:16.227 22:25:11 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:26:16.227 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:26:16.227 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:26:16.227 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:16.227 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:16.227 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:16.227 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:16.227 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:16.227 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:16.227 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:16.227 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:26:16.227 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:26:16.489 [2024-10-01 22:25:11.551870] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:26:16.489 request: 00:26:16.489 { 00:26:16.489 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:16.489 "nsid": 2, 00:26:16.489 "host": "nqn.2016-06.io.spdk:host1", 00:26:16.489 "method": "nvmf_ns_remove_host", 00:26:16.489 "req_id": 1 00:26:16.489 } 00:26:16.489 Got JSON-RPC error response 00:26:16.489 response: 00:26:16.489 { 00:26:16.489 "code": -32602, 00:26:16.489 "message": "Invalid parameters" 00:26:16.489 } 00:26:16.489 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:26:16.489 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:16.489 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:16.489 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:16.489 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:26:16.489 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:26:16.489 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:26:16.489 22:25:11 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:26:16.489 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:16.489 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:26:16.489 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:16.489 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:26:16.489 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:26:16.489 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:26:16.489 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:26:16.489 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:26:16.489 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:26:16.489 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:16.489 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:26:16.489 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:16.489 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:16.489 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:16.489 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:26:16.489 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:26:16.489 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:26:16.489 [ 0]:0x2 00:26:16.489 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:26:16.489 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:26:16.489 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aaa4a2b68afb44e3af8551c023f5dcd3 00:26:16.489 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aaa4a2b68afb44e3af8551c023f5dcd3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:16.489 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:26:16.489 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:16.750 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:16.750 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=152435 00:26:16.750 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:26:16.750 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:26:16.751 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 152435 /var/tmp/host.sock 00:26:16.751 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 152435 ']' 00:26:16.751 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:26:16.751 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:16.751 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:26:16.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:26:16.751 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:16.751 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:26:16.751 [2024-10-01 22:25:11.820098] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:26:16.751 [2024-10-01 22:25:11.820152] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152435 ] 00:26:16.751 [2024-10-01 22:25:11.897693] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.751 [2024-10-01 22:25:11.961460] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:17.695 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:17.695 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:26:17.695 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:17.695 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:17.695 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 03779758-46f2-4856-8841-47b48d23aa91 00:26:17.695 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:26:17.695 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 0377975846F24856884147B48D23AA91 -i 00:26:17.956 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid fb80f5ce-ef1e-4454-a017-bcbe885d70b9 00:26:17.956 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:26:17.956 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g FB80F5CEEF1E4454A017BCBE885D70B9 -i 00:26:18.217 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:26:18.217 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:26:18.479 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:26:18.479 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:26:18.740 nvme0n1 00:26:18.740 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:26:18.740 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:26:19.000 nvme1n2 00:26:19.000 22:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:26:19.000 22:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:26:19.000 22:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:26:19.000 22:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:26:19.000 22:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:26:19.260 22:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:26:19.260 22:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:26:19.260 22:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:26:19.260 22:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:26:19.522 22:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 03779758-46f2-4856-8841-47b48d23aa91 == \0\3\7\7\9\7\5\8\-\4\6\f\2\-\4\8\5\6\-\8\8\4\1\-\4\7\b\4\8\d\2\3\a\a\9\1 ]] 00:26:19.522 22:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:26:19.522 22:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:26:19.522 22:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:26:19.522 22:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
fb80f5ce-ef1e-4454-a017-bcbe885d70b9 == \f\b\8\0\f\5\c\e\-\e\f\1\e\-\4\4\5\4\-\a\0\1\7\-\b\c\b\e\8\8\5\d\7\0\b\9 ]] 00:26:19.522 22:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 152435 00:26:19.522 22:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 152435 ']' 00:26:19.522 22:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 152435 00:26:19.522 22:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:26:19.522 22:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:19.522 22:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 152435 00:26:19.522 22:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:19.522 22:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:19.522 22:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 152435' 00:26:19.522 killing process with pid 152435 00:26:19.522 22:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 152435 00:26:19.522 22:25:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 152435 00:26:20.093 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:20.093 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:26:20.093 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:26:20.093 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:20.093 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:26:20.093 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:20.093 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:26:20.093 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:20.093 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:20.093 rmmod nvme_tcp 00:26:20.093 rmmod nvme_fabrics 00:26:20.093 rmmod nvme_keyring 00:26:20.093 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:20.093 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:26:20.093 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:26:20.093 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@515 -- # '[' -n 150235 ']' 00:26:20.093 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # killprocess 150235 00:26:20.093 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 150235 ']' 00:26:20.093 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 150235 00:26:20.093 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@955 -- # uname 00:26:20.093 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:20.093 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 150235 00:26:20.355 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:20.355 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:20.355 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 150235' 00:26:20.355 killing process with pid 150235 00:26:20.355 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 150235 00:26:20.355 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 150235 00:26:20.355 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:20.355 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:20.355 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:20.355 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:26:20.355 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-save 00:26:20.355 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:20.355 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-restore 00:26:20.355 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:20.355 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:20.355 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:20.355 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:20.355 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:22.903 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:22.903 00:26:22.903 real 0m24.847s 00:26:22.903 user 0m25.137s 00:26:22.903 sys 0m7.666s 00:26:22.903 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:22.903 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:26:22.903 ************************************ 00:26:22.903 END TEST nvmf_ns_masking 00:26:22.903 ************************************ 00:26:22.903 22:25:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:26:22.903 22:25:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:26:22.903 22:25:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:22.903 22:25:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:22.903 22:25:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 
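The visibility checks repeated throughout this test all reduce to one idiom: list the active namespaces on the controller, then read back the NGUID of the NSID in question and compare it with the all-zero value reported for a namespace the host is not allowed to see. A minimal standalone sketch, assuming the same nvme-cli and jq tools the trace invokes:

ctrl=/dev/nvme0
nsid=0x1
# A masked namespace drops out of the active-namespace list...
nvme list-ns "$ctrl" | grep -q "$nsid" || echo "nsid $nsid not listed"
# ...and identifies with an all-zero NGUID.
nguid=$(nvme id-ns "$ctrl" -n "$nsid" -o json | jq -r .nguid)
[[ $nguid == 00000000000000000000000000000000 ]] && echo "nsid $nsid hidden"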
00:26:22.903 ************************************ 00:26:22.903 START TEST nvmf_nvme_cli 00:26:22.903 ************************************ 00:26:22.903 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:26:22.903 * Looking for test storage... 00:26:22.903 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:22.903 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:22.903 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lcov --version 00:26:22.903 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:22.903 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:22.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.904 --rc genhtml_branch_coverage=1 00:26:22.904 --rc genhtml_function_coverage=1 00:26:22.904 --rc genhtml_legend=1 00:26:22.904 --rc geninfo_all_blocks=1 00:26:22.904 --rc geninfo_unexecuted_blocks=1 00:26:22.904 00:26:22.904 ' 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:22.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.904 --rc genhtml_branch_coverage=1 00:26:22.904 --rc genhtml_function_coverage=1 00:26:22.904 --rc genhtml_legend=1 00:26:22.904 --rc geninfo_all_blocks=1 00:26:22.904 --rc geninfo_unexecuted_blocks=1 00:26:22.904 00:26:22.904 ' 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:22.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.904 --rc genhtml_branch_coverage=1 00:26:22.904 --rc genhtml_function_coverage=1 00:26:22.904 --rc genhtml_legend=1 00:26:22.904 --rc geninfo_all_blocks=1 00:26:22.904 --rc geninfo_unexecuted_blocks=1 00:26:22.904 00:26:22.904 ' 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:22.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.904 --rc genhtml_branch_coverage=1 00:26:22.904 --rc genhtml_function_coverage=1 00:26:22.904 --rc genhtml_legend=1 00:26:22.904 --rc geninfo_all_blocks=1 00:26:22.904 --rc geninfo_unexecuted_blocks=1 00:26:22.904 00:26:22.904 ' 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
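The lcov probe traced above relies on the tree's generic comparator: split each version string on dots, then walk the fields numerically until one side wins. A standalone sketch of the same idea in plain bash follows (an illustration, not the scripts/common.sh implementation itself):

# lt VER1 VER2 -> succeeds when VER1 sorts strictly before VER2.
lt() {
    local IFS=. i
    local -a a=($1) b=($2)
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        ((${a[i]:-0} < ${b[i]:-0})) && return 0   # earlier field decides
        ((${a[i]:-0} > ${b[i]:-0})) && return 1
    done
    return 1   # equal versions are not "less than"
}
lt 1.15 2 && echo "lcov 1.15 predates 2"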
00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:22.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:22.904 22:25:17 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:22.904 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:22.905 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:22.905 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:22.905 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:22.905 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:22.905 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:22.905 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:22.905 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:26:22.905 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:31.052 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:31.052 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:31.052 
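gather_supported_nvmf_pci_devs, traced above, fills per-family device-ID whitelists — Intel (0x8086) E810 IDs 0x1592/0x159b and X722 0x37d2, plus a list of Mellanox (0x15b3) ConnectX IDs — and intersects them with the machine's PCI bus. Here both matches are E810 functions (device ID 0x159b, bound to the in-tree ice driver) at 0000:4b:00.0 and 0000:4b:00.1. A simplified sketch of the matching, using a plain sysfs walk instead of the script's pci_bus_cache:

    intel=0x8086
    declare -a e810
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor") device=$(<"$pci/device")
        # 0x1592 / 0x159b are the two E810 IDs whitelisted in the trace above
        if [[ $vendor == "$intel" && $device =~ ^0x(1592|159b)$ ]]; then
            e810+=("${pci##*/}")
            echo "Found ${pci##*/} ($vendor - $device)"
        fi
    done

The TCP path then resolves each match to its network interface through /sys/bus/pci/devices/$pci/net/*, which is where the cvl_0_0 and cvl_0_1 names just below come from.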
22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:31.052 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:31.052 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # is_hw=yes 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:31.052 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:31.053 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:31.053 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:31.053 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:31.053 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:31.053 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:31.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:31.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms 00:26:31.053 00:26:31.053 --- 10.0.0.2 ping statistics --- 00:26:31.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:31.053 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:26:31.053 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:31.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:31.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:26:31.053 00:26:31.053 --- 10.0.0.1 ping statistics --- 00:26:31.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:31.053 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:26:31.053 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:31.053 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # return 0 00:26:31.053 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:31.053 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:31.053 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:31.053 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:31.053 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:31.053 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:31.053 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:31.053 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:26:31.053 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:31.053 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:31.053 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:26:31.053 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # nvmfpid=157453 00:26:31.053 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # waitforlisten 157453 00:26:31.053 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:31.053 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 157453 ']' 00:26:31.053 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:31.053 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:31.053 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:31.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:31.053 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:31.053 22:25:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:26:31.053 [2024-10-01 22:25:25.445199] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:26:31.053 [2024-10-01 22:25:25.445250] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:31.053 [2024-10-01 22:25:25.512700] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:31.053 [2024-10-01 22:25:25.579283] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:31.053 [2024-10-01 22:25:25.579321] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:31.053 [2024-10-01 22:25:25.579329] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:31.053 [2024-10-01 22:25:25.579336] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:31.053 [2024-10-01 22:25:25.579342] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:31.053 [2024-10-01 22:25:25.579480] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:31.053 [2024-10-01 22:25:25.579593] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:26:31.053 [2024-10-01 22:25:25.579729] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:31.053 [2024-10-01 22:25:25.579729] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:26:31.053 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:31.053 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:26:31.053 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:31.053 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:31.053 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:26:31.053 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:31.053 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:31.053 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.053 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:26:31.053 [2024-10-01 22:25:26.285798] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:31.053 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.053 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:31.053 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.053 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:26:31.315 Malloc0 00:26:31.315 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.315 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:31.315 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
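Condensed from the rpc_cmd traces here and just below: with cvl_0_0 moved into namespace cvl_0_0_ns_spdk (target side, 10.0.0.2/24) and cvl_0_1 left in the root namespace (initiator side, 10.0.0.1/24), the target — running under `ip netns exec` — is configured over the RPC socket with two Malloc namespaces behind one subsystem. `rpc.py` stands for scripts/rpc.py against /var/tmp/spdk.sock; options are glossed only where the trace itself defines them:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0     # MALLOC_BDEV_SIZE=64 (MB), 512 B blocks
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420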
00:26:31.315 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:26:31.315 Malloc1 00:26:31.315 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.315 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:26:31.315 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.315 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:26:31.315 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.315 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:31.315 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.315 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:26:31.315 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.315 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:31.315 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.315 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:26:31.315 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.315 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:31.315 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.315 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:26:31.315 [2024-10-01 22:25:26.351704] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:31.315 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.315 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:31.315 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.315 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:26:31.315 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.315 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 4420 00:26:31.315 00:26:31.315 Discovery Log Number of Records 2, Generation counter 2 00:26:31.315 =====Discovery Log Entry 0====== 00:26:31.315 trtype: tcp 00:26:31.315 adrfam: ipv4 00:26:31.315 subtype: current discovery subsystem 00:26:31.315 treq: not required 00:26:31.315 portid: 0 00:26:31.315 trsvcid: 4420 00:26:31.315 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:26:31.315 traddr: 10.0.0.2 00:26:31.315 eflags: explicit discovery connections, duplicate discovery information 00:26:31.315 sectype: none 00:26:31.315 =====Discovery Log Entry 1====== 00:26:31.315 trtype: tcp 00:26:31.315 adrfam: ipv4 00:26:31.315 subtype: nvme subsystem 00:26:31.315 treq: not required 00:26:31.315 portid: 0 00:26:31.315 trsvcid: 4420 00:26:31.315 subnqn: nqn.2016-06.io.spdk:cnode1 00:26:31.315 traddr: 10.0.0.2 00:26:31.315 eflags: none 00:26:31.315 sectype: none 00:26:31.315 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:26:31.315 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:26:31.315 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:26:31.315 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:26:31.315 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:26:31.576 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:26:31.576 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:26:31.576 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:26:31.576 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:26:31.576 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:26:31.576 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:32.960 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:26:32.960 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:26:32.960 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:32.960 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:26:32.960 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:26:32.960 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:26:34.879 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:34.879 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:34.879 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:26:34.879 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:26:34.879 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:34.879 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:26:34.879 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:26:34.879 22:25:30 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:26:34.879 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:26:34.879 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:26:35.139 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:26:35.139 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:26:35.139 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:26:35.139 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:26:35.139 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:26:35.139 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:26:35.139 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:26:35.139 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:26:35.139 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:26:35.139 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:26:35.139 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:26:35.139 /dev/nvme0n2 ]] 00:26:35.139 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:26:35.139 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:26:35.139 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:26:35.139 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:26:35.139 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:26:35.399 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:26:35.399 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:26:35.399 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:26:35.399 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:26:35.399 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:26:35.399 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:26:35.399 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:26:35.399 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:26:35.399 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:26:35.399 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:26:35.399 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:26:35.399 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:35.660 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:35.660 22:25:30 
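`nvme discover` above returned two records (the discovery subsystem plus cnode1), `nvme connect` attached cnode1, and the test then polls until both Malloc namespaces surface as block devices carrying the target's serial. The polling helper reduces to roughly this, condensed from the lsblk/grep trace above (2 s steps, up to 16 tries):

    waitforserial() {
        local serial=$1 want=${2:-1} i=0
        while (( i++ <= 15 )); do
            (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == want )) && return 0
            sleep 2
        done
        return 1
    }
    waitforserial SPDKISFASTANDAWESOME 2    # two namespaces -> /dev/nvme0n1 + /dev/nvme0n2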
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:35.660 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:26:35.660 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:35.660 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:35.660 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:35.660 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:35.660 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:26:35.660 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:26:35.660 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:35.660 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.660 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:26:35.660 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.660 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:26:35.661 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:26:35.661 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:35.661 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:26:35.661 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:35.661 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:26:35.661 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:35.661 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:35.661 rmmod nvme_tcp 00:26:35.661 rmmod nvme_fabrics 00:26:35.661 rmmod nvme_keyring 00:26:35.661 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:35.661 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:26:35.661 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:26:35.661 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@515 -- # '[' -n 157453 ']' 00:26:35.661 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # killprocess 157453 00:26:35.661 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 157453 ']' 00:26:35.661 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 157453 00:26:35.661 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:26:35.661 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:35.661 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 157453 
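Teardown mirrors setup: disconnect, delete the subsystem, `modprobe -v -r` the nvme-tcp/fabrics/keyring modules, then reap the target. killprocess reduces to roughly this — simplified; the real helper also special-cases targets launched through sudo, which is what the reactor_0-vs-sudo comm check above and the kill/wait just below are doing:

    killprocess() {
        local pid=$1
        kill -0 "$pid"                       # still alive?
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                          # reap it so sockets/ports are free for the next test
    }

The remaining cleanup then strips every iptables rule tagged with the SPDK_NVMF comment by round-tripping `iptables-save | grep -v SPDK_NVMF | iptables-restore`, removes the spdk network namespace, and flushes the initiator address — which is why each test can add its own ACCEPT rule without leaking state into the next one.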
00:26:35.661 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:35.661 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:35.661 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 157453' 00:26:35.661 killing process with pid 157453 00:26:35.661 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 157453 00:26:35.661 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 157453 00:26:35.921 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:35.921 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:35.921 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:35.921 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:26:35.921 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-save 00:26:35.921 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:35.921 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-restore 00:26:35.921 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:35.921 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:35.921 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:35.921 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:35.921 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:38.465 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:38.465 00:26:38.465 real 0m15.416s 00:26:38.465 user 0m23.869s 00:26:38.465 sys 0m6.185s 00:26:38.465 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:38.465 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:26:38.465 ************************************ 00:26:38.465 END TEST nvmf_nvme_cli 00:26:38.465 ************************************ 00:26:38.465 22:25:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:26:38.465 22:25:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:26:38.465 22:25:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:38.465 22:25:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:38.465 22:25:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:38.465 ************************************ 00:26:38.465 START TEST nvmf_vfio_user 00:26:38.465 ************************************ 00:26:38.465 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:26:38.465 * Looking for test storage... 00:26:38.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:38.465 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:38.465 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lcov --version 00:26:38.465 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:38.465 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:38.465 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:38.465 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:38.465 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:38.465 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:26:38.465 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:26:38.465 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:26:38.465 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:26:38.465 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:26:38.465 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:26:38.465 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:26:38.465 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:38.465 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:26:38.465 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:26:38.465 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:38.465 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:38.465 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:26:38.465 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:26:38.465 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:38.465 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:26:38.465 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:26:38.465 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:26:38.465 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:26:38.465 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:38.465 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:26:38.465 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:38.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.466 --rc genhtml_branch_coverage=1 00:26:38.466 --rc genhtml_function_coverage=1 00:26:38.466 --rc genhtml_legend=1 00:26:38.466 --rc geninfo_all_blocks=1 00:26:38.466 --rc geninfo_unexecuted_blocks=1 00:26:38.466 00:26:38.466 ' 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:38.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.466 --rc genhtml_branch_coverage=1 00:26:38.466 --rc genhtml_function_coverage=1 00:26:38.466 --rc genhtml_legend=1 00:26:38.466 --rc geninfo_all_blocks=1 00:26:38.466 --rc geninfo_unexecuted_blocks=1 00:26:38.466 00:26:38.466 ' 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:38.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.466 --rc genhtml_branch_coverage=1 00:26:38.466 --rc genhtml_function_coverage=1 00:26:38.466 --rc genhtml_legend=1 00:26:38.466 --rc geninfo_all_blocks=1 00:26:38.466 --rc geninfo_unexecuted_blocks=1 00:26:38.466 00:26:38.466 ' 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:38.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.466 --rc genhtml_branch_coverage=1 00:26:38.466 --rc genhtml_function_coverage=1 00:26:38.466 --rc genhtml_legend=1 00:26:38.466 --rc geninfo_all_blocks=1 00:26:38.466 --rc geninfo_unexecuted_blocks=1 00:26:38.466 00:26:38.466 ' 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:38.466 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
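The `[: : integer expression expected` complaint (here and in the nvme_cli run above) is common.sh line 33 evaluating `[ '' -eq 1 ]`: an unset tuning variable expands to the empty string, POSIX `[` rejects the non-numeric operand with exit status 2, the `if` treats that as false, and the run continues. Harmless, but the noise would disappear with a default expansion — the variable name below is hypothetical, since the trace only shows the empty expansion:

    # hypothetical name; substitute whatever common.sh line 33 actually tests
    if [ "${SPDK_TEST_SETTING:-0}" -eq 1 ]; then
        NVMF_APP+=(--some-extra-arg)      # also hypothetical
    fi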
00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=159270 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 159270' 00:26:38.466 Process pid: 159270 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 159270 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 159270 ']' 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:38.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:38.466 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:26:38.466 [2024-10-01 22:25:33.540495] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:26:38.466 [2024-10-01 22:25:33.540558] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:38.466 [2024-10-01 22:25:33.603930] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:38.466 [2024-10-01 22:25:33.672834] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:38.466 [2024-10-01 22:25:33.672870] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:38.466 [2024-10-01 22:25:33.672878] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:38.467 [2024-10-01 22:25:33.672885] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:38.467 [2024-10-01 22:25:33.672890] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:38.467 [2024-10-01 22:25:33.673024] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:38.467 [2024-10-01 22:25:33.673138] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:26:38.467 [2024-10-01 22:25:33.673292] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:38.467 [2024-10-01 22:25:33.673293] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:26:39.408 22:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:39.408 22:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:26:39.408 22:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:26:40.348 22:25:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:26:40.348 22:25:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:26:40.348 22:25:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:26:40.348 22:25:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:26:40.348 22:25:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:26:40.348 22:25:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:26:40.607 Malloc1 00:26:40.607 22:25:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:26:40.865 22:25:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:26:40.865 22:25:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:26:41.126 22:25:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:26:41.126 22:25:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:26:41.126 22:25:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:26:41.385 Malloc2 00:26:41.385 22:25:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
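For reference, the per-device setup traced above (completed for vfio-user1, in progress for vfio-user2) condenses to the following RPC sequence; paths are abbreviated, and rpc.py talks to the nvmf_tgt launched at nvmf_vfio_user.sh@54:

  rpc=scripts/rpc.py                               # abbreviated from the full workspace path
  $rpc nvmf_create_transport -t VFIOUSER           # one-time: register the vfio-user transport
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1  # socket directory the listener will use
  $rpc bdev_malloc_create 64 512 -b Malloc1        # 64 MiB backing bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
      -a /var/run/vfio-user/domain/vfio-user1/1 -s 0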
00:26:41.646 22:25:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:26:41.646 22:25:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:26:41.908 22:25:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:26:41.908 22:25:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:26:41.908 22:25:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:26:41.908 22:25:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:26:41.908 22:25:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:26:41.908 22:25:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:26:41.908 [2024-10-01 22:25:37.060716] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:26:41.908 [2024-10-01 22:25:37.060758] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid159957 ] 00:26:41.908 [2024-10-01 22:25:37.093260] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:26:41.908 [2024-10-01 22:25:37.101932] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:26:41.908 [2024-10-01 22:25:37.101955] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f9261a35000 00:26:41.908 [2024-10-01 22:25:37.102938] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:26:41.908 [2024-10-01 22:25:37.103937] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:26:41.908 [2024-10-01 22:25:37.104940] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:26:41.908 [2024-10-01 22:25:37.105952] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:26:41.908 [2024-10-01 22:25:37.106955] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:26:41.908 [2024-10-01 22:25:37.107955] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:26:41.908 [2024-10-01 22:25:37.108965] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:26:41.908 [2024-10-01 22:25:37.109965] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:26:41.908 [2024-10-01 22:25:37.110979] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:26:41.908 [2024-10-01 22:25:37.110988] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f9261a2a000 00:26:41.908 [2024-10-01 22:25:37.112315] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:26:41.908 [2024-10-01 22:25:37.129227] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:26:41.908 [2024-10-01 22:25:37.129251] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:26:41.908 [2024-10-01 22:25:37.132094] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:26:41.908 [2024-10-01 22:25:37.132144] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:26:41.908 [2024-10-01 22:25:37.132232] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:26:41.908 [2024-10-01 22:25:37.132250] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:26:41.908 [2024-10-01 22:25:37.132256] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:26:41.908 [2024-10-01 22:25:37.136631] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:26:41.908 [2024-10-01 22:25:37.136641] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:26:41.908 [2024-10-01 22:25:37.136648] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:26:41.908 [2024-10-01 22:25:37.137125] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:26:41.908 [2024-10-01 22:25:37.137133] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:26:41.908 [2024-10-01 22:25:37.137140] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:26:41.908 [2024-10-01 22:25:37.138131] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:26:41.908 [2024-10-01 22:25:37.138140] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:41.908 [2024-10-01 22:25:37.139137] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:26:41.908 [2024-10-01 
22:25:37.139145] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:26:41.908 [2024-10-01 22:25:37.139150] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:26:41.908 [2024-10-01 22:25:37.139157] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:41.908 [2024-10-01 22:25:37.139262] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:26:41.908 [2024-10-01 22:25:37.139267] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:41.908 [2024-10-01 22:25:37.139272] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:26:41.908 [2024-10-01 22:25:37.140147] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:26:41.908 [2024-10-01 22:25:37.141154] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:26:41.908 [2024-10-01 22:25:37.142162] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:26:41.908 [2024-10-01 22:25:37.143164] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:26:41.908 [2024-10-01 22:25:37.143219] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:41.908 [2024-10-01 22:25:37.144176] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:26:41.908 [2024-10-01 22:25:37.144187] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:41.908 [2024-10-01 22:25:37.144192] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:26:41.908 [2024-10-01 22:25:37.144213] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:26:41.908 [2024-10-01 22:25:37.144220] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:26:41.908 [2024-10-01 22:25:37.144234] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:26:41.908 [2024-10-01 22:25:37.144239] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:26:41.908 [2024-10-01 22:25:37.144243] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:26:41.908 [2024-10-01 22:25:37.144255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:26:41.908 [2024-10-01 22:25:37.144287] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:26:41.908 [2024-10-01 22:25:37.144296] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:26:41.908 [2024-10-01 22:25:37.144301] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:26:41.908 [2024-10-01 22:25:37.144305] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:26:41.908 [2024-10-01 22:25:37.144310] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:26:41.908 [2024-10-01 22:25:37.144315] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:26:41.908 [2024-10-01 22:25:37.144319] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:26:41.908 [2024-10-01 22:25:37.144324] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:26:41.908 [2024-10-01 22:25:37.144332] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:26:41.908 [2024-10-01 22:25:37.144341] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:26:41.908 [2024-10-01 22:25:37.144356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:26:41.908 [2024-10-01 22:25:37.144367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:26:41.908 [2024-10-01 22:25:37.144376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:26:41.908 [2024-10-01 22:25:37.144385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:26:41.908 [2024-10-01 22:25:37.144393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:26:41.908 [2024-10-01 22:25:37.144398] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:26:41.908 [2024-10-01 22:25:37.144408] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:41.908 [2024-10-01 22:25:37.144419] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:26:41.908 [2024-10-01 22:25:37.144426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:26:41.908 [2024-10-01 22:25:37.144432] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:26:41.908 [2024-10-01 22:25:37.144437] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:26:41.908 [2024-10-01 22:25:37.144444] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:26:41.908 [2024-10-01 22:25:37.144451] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:26:41.908 [2024-10-01 22:25:37.144460] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:26:41.908 [2024-10-01 22:25:37.144470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:26:41.908 [2024-10-01 22:25:37.144532] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:26:41.908 [2024-10-01 22:25:37.144540] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:26:41.908 [2024-10-01 22:25:37.144548] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:26:41.908 [2024-10-01 22:25:37.144553] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:26:41.909 [2024-10-01 22:25:37.144556] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:26:41.909 [2024-10-01 22:25:37.144563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:26:41.909 [2024-10-01 22:25:37.144577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:26:41.909 [2024-10-01 22:25:37.144585] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:26:41.909 [2024-10-01 22:25:37.144597] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:26:41.909 [2024-10-01 22:25:37.144605] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:26:41.909 [2024-10-01 22:25:37.144612] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:26:41.909 [2024-10-01 22:25:37.144616] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:26:41.909 [2024-10-01 22:25:37.144620] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:26:41.909 [2024-10-01 22:25:37.144629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:26:41.909 [2024-10-01 22:25:37.144648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:26:41.909 [2024-10-01 22:25:37.144660] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:26:41.909 [2024-10-01 22:25:37.144668] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:26:41.909 [2024-10-01 22:25:37.144675] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:26:41.909 [2024-10-01 22:25:37.144681] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:26:41.909 [2024-10-01 22:25:37.144685] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:26:41.909 [2024-10-01 22:25:37.144691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:26:41.909 [2024-10-01 22:25:37.144701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:26:41.909 [2024-10-01 22:25:37.144709] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:26:41.909 [2024-10-01 22:25:37.144716] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:26:41.909 [2024-10-01 22:25:37.144725] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:26:41.909 [2024-10-01 22:25:37.144732] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:26:41.909 [2024-10-01 22:25:37.144737] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:26:41.909 [2024-10-01 22:25:37.144742] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:26:41.909 [2024-10-01 22:25:37.144747] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:26:41.909 [2024-10-01 22:25:37.144752] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:26:41.909 [2024-10-01 22:25:37.144757] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:26:41.909 [2024-10-01 22:25:37.144776] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:26:41.909 [2024-10-01 22:25:37.144784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:26:41.909 [2024-10-01 22:25:37.144796] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:26:41.909 [2024-10-01 22:25:37.144806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:26:41.909 [2024-10-01 22:25:37.144817] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:26:41.909 [2024-10-01 22:25:37.144830] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:26:41.909 [2024-10-01 22:25:37.144841] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:26:41.909 [2024-10-01 22:25:37.144848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:26:41.909 [2024-10-01 22:25:37.144862] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:26:41.909 [2024-10-01 22:25:37.144866] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:26:41.909 [2024-10-01 22:25:37.144870] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:26:41.909 [2024-10-01 22:25:37.144874] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:26:41.909 [2024-10-01 22:25:37.144877] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:26:41.909 [2024-10-01 22:25:37.144883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:26:41.909 [2024-10-01 22:25:37.144893] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:26:41.909 [2024-10-01 22:25:37.144898] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:26:41.909 [2024-10-01 22:25:37.144901] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:26:41.909 [2024-10-01 22:25:37.144907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:26:41.909 [2024-10-01 22:25:37.144915] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:26:41.909 [2024-10-01 22:25:37.144920] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:26:41.909 [2024-10-01 22:25:37.144923] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:26:41.909 [2024-10-01 22:25:37.144929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:26:41.909 [2024-10-01 22:25:37.144937] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:26:41.909 [2024-10-01 22:25:37.144942] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:26:41.909 [2024-10-01 22:25:37.144945] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:26:41.909 [2024-10-01 22:25:37.144951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:26:41.909 [2024-10-01 22:25:37.144958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:26:41.909 [2024-10-01 22:25:37.144970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:26:41.909 [2024-10-01 22:25:37.144981] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:26:41.909 [2024-10-01 22:25:37.144988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:26:41.909 ===================================================== 00:26:41.909 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:26:41.909 ===================================================== 00:26:41.909 Controller Capabilities/Features 00:26:41.909 ================================ 00:26:41.909 Vendor ID: 4e58 00:26:41.909 Subsystem Vendor ID: 4e58 00:26:41.909 Serial Number: SPDK1 00:26:41.909 Model Number: SPDK bdev Controller 00:26:41.909 Firmware Version: 25.01 00:26:41.909 Recommended Arb Burst: 6 00:26:41.909 IEEE OUI Identifier: 8d 6b 50 00:26:41.909 Multi-path I/O 00:26:41.909 May have multiple subsystem ports: Yes 00:26:41.909 May have multiple controllers: Yes 00:26:41.909 Associated with SR-IOV VF: No 00:26:41.909 Max Data Transfer Size: 131072 00:26:41.909 Max Number of Namespaces: 32 00:26:41.909 Max Number of I/O Queues: 127 00:26:41.909 NVMe Specification Version (VS): 1.3 00:26:41.909 NVMe Specification Version (Identify): 1.3 00:26:41.909 Maximum Queue Entries: 256 00:26:41.909 Contiguous Queues Required: Yes 00:26:41.909 Arbitration Mechanisms Supported 00:26:41.909 Weighted Round Robin: Not Supported 00:26:41.909 Vendor Specific: Not Supported 00:26:41.909 Reset Timeout: 15000 ms 00:26:41.909 Doorbell Stride: 4 bytes 00:26:41.909 NVM Subsystem Reset: Not Supported 00:26:41.909 Command Sets Supported 00:26:41.909 NVM Command Set: Supported 00:26:41.909 Boot Partition: Not Supported 00:26:41.909 Memory Page Size Minimum: 4096 bytes 00:26:41.909 Memory Page Size Maximum: 4096 bytes 00:26:41.909 Persistent Memory Region: Not Supported 00:26:41.909 Optional Asynchronous Events Supported 00:26:41.909 Namespace Attribute Notices: Supported 00:26:41.909 Firmware Activation Notices: Not Supported 00:26:41.909 ANA Change Notices: Not Supported 00:26:41.909 PLE Aggregate Log Change Notices: Not Supported 00:26:41.909 LBA Status Info Alert Notices: Not Supported 00:26:41.909 EGE Aggregate Log Change Notices: Not Supported 00:26:41.909 Normal NVM Subsystem Shutdown event: Not Supported 00:26:41.909 Zone Descriptor Change Notices: Not Supported 00:26:41.909 Discovery Log Change Notices: Not Supported 00:26:41.909 Controller Attributes 00:26:41.909 128-bit Host Identifier: Supported 00:26:41.909 Non-Operational Permissive Mode: Not Supported 00:26:41.909 NVM Sets: Not Supported 00:26:41.909 Read Recovery Levels: Not Supported 00:26:41.909 Endurance Groups: Not Supported 00:26:41.909 Predictable Latency Mode: Not Supported 00:26:41.909 Traffic Based Keep ALive: Not Supported 00:26:41.909 Namespace Granularity: Not Supported 00:26:41.909 SQ Associations: Not Supported 00:26:41.909 UUID List: Not Supported 00:26:41.909 Multi-Domain Subsystem: Not Supported 00:26:41.909 Fixed Capacity Management: Not Supported 00:26:41.909 Variable Capacity Management: Not Supported 00:26:41.909 Delete Endurance Group: Not Supported 00:26:41.909 Delete NVM Set: Not Supported 00:26:41.909 Extended LBA Formats Supported: Not Supported 00:26:41.910 Flexible Data Placement Supported: Not Supported 00:26:41.910 00:26:41.910 Controller Memory Buffer Support 00:26:41.910 ================================ 00:26:41.910 Supported: No 00:26:41.910 00:26:41.910 Persistent Memory Region Support 00:26:41.910 
================================ 00:26:41.910 Supported: No 00:26:41.910 00:26:41.910 Admin Command Set Attributes 00:26:41.910 ============================ 00:26:41.910 Security Send/Receive: Not Supported 00:26:41.910 Format NVM: Not Supported 00:26:41.910 Firmware Activate/Download: Not Supported 00:26:41.910 Namespace Management: Not Supported 00:26:41.910 Device Self-Test: Not Supported 00:26:41.910 Directives: Not Supported 00:26:41.910 NVMe-MI: Not Supported 00:26:41.910 Virtualization Management: Not Supported 00:26:41.910 Doorbell Buffer Config: Not Supported 00:26:41.910 Get LBA Status Capability: Not Supported 00:26:41.910 Command & Feature Lockdown Capability: Not Supported 00:26:41.910 Abort Command Limit: 4 00:26:41.910 Async Event Request Limit: 4 00:26:41.910 Number of Firmware Slots: N/A 00:26:41.910 Firmware Slot 1 Read-Only: N/A 00:26:41.910 Firmware Activation Without Reset: N/A 00:26:41.910 Multiple Update Detection Support: N/A 00:26:41.910 Firmware Update Granularity: No Information Provided 00:26:41.910 Per-Namespace SMART Log: No 00:26:41.910 Asymmetric Namespace Access Log Page: Not Supported 00:26:41.910 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:26:41.910 Command Effects Log Page: Supported 00:26:41.910 Get Log Page Extended Data: Supported 00:26:41.910 Telemetry Log Pages: Not Supported 00:26:41.910 Persistent Event Log Pages: Not Supported 00:26:41.910 Supported Log Pages Log Page: May Support 00:26:41.910 Commands Supported & Effects Log Page: Not Supported 00:26:41.910 Feature Identifiers & Effects Log Page:May Support 00:26:41.910 NVMe-MI Commands & Effects Log Page: May Support 00:26:41.910 Data Area 4 for Telemetry Log: Not Supported 00:26:41.910 Error Log Page Entries Supported: 128 00:26:41.910 Keep Alive: Supported 00:26:41.910 Keep Alive Granularity: 10000 ms 00:26:41.910 00:26:41.910 NVM Command Set Attributes 00:26:41.910 ========================== 00:26:41.910 Submission Queue Entry Size 00:26:41.910 Max: 64 00:26:41.910 Min: 64 00:26:41.910 Completion Queue Entry Size 00:26:41.910 Max: 16 00:26:41.910 Min: 16 00:26:41.910 Number of Namespaces: 32 00:26:41.910 Compare Command: Supported 00:26:41.910 Write Uncorrectable Command: Not Supported 00:26:41.910 Dataset Management Command: Supported 00:26:41.910 Write Zeroes Command: Supported 00:26:41.910 Set Features Save Field: Not Supported 00:26:41.910 Reservations: Not Supported 00:26:41.910 Timestamp: Not Supported 00:26:41.910 Copy: Supported 00:26:41.910 Volatile Write Cache: Present 00:26:41.910 Atomic Write Unit (Normal): 1 00:26:41.910 Atomic Write Unit (PFail): 1 00:26:41.910 Atomic Compare & Write Unit: 1 00:26:41.910 Fused Compare & Write: Supported 00:26:41.910 Scatter-Gather List 00:26:41.910 SGL Command Set: Supported (Dword aligned) 00:26:41.910 SGL Keyed: Not Supported 00:26:41.910 SGL Bit Bucket Descriptor: Not Supported 00:26:41.910 SGL Metadata Pointer: Not Supported 00:26:41.910 Oversized SGL: Not Supported 00:26:41.910 SGL Metadata Address: Not Supported 00:26:41.910 SGL Offset: Not Supported 00:26:41.910 Transport SGL Data Block: Not Supported 00:26:41.910 Replay Protected Memory Block: Not Supported 00:26:41.910 00:26:41.910 Firmware Slot Information 00:26:41.910 ========================= 00:26:41.910 Active slot: 1 00:26:41.910 Slot 1 Firmware Revision: 25.01 00:26:41.910 00:26:41.910 00:26:41.910 Commands Supported and Effects 00:26:41.910 ============================== 00:26:41.910 Admin Commands 00:26:41.910 -------------- 00:26:41.910 Get Log Page (02h): Supported 
00:26:41.910 Identify (06h): Supported 00:26:41.910 Abort (08h): Supported 00:26:41.910 Set Features (09h): Supported 00:26:41.910 Get Features (0Ah): Supported 00:26:41.910 Asynchronous Event Request (0Ch): Supported 00:26:41.910 Keep Alive (18h): Supported 00:26:41.910 I/O Commands 00:26:41.910 ------------ 00:26:41.910 Flush (00h): Supported LBA-Change 00:26:41.910 Write (01h): Supported LBA-Change 00:26:41.910 Read (02h): Supported 00:26:41.910 Compare (05h): Supported 00:26:41.910 Write Zeroes (08h): Supported LBA-Change 00:26:41.910 Dataset Management (09h): Supported LBA-Change 00:26:41.910 Copy (19h): Supported LBA-Change 00:26:41.910 00:26:41.910 Error Log 00:26:41.910 ========= 00:26:41.910 00:26:41.910 Arbitration 00:26:41.910 =========== 00:26:41.910 Arbitration Burst: 1 00:26:41.910 00:26:41.910 Power Management 00:26:41.910 ================ 00:26:41.910 Number of Power States: 1 00:26:41.910 Current Power State: Power State #0 00:26:41.910 Power State #0: 00:26:41.910 Max Power: 0.00 W 00:26:41.910 Non-Operational State: Operational 00:26:41.910 Entry Latency: Not Reported 00:26:41.910 Exit Latency: Not Reported 00:26:41.910 Relative Read Throughput: 0 00:26:41.910 Relative Read Latency: 0 00:26:41.910 Relative Write Throughput: 0 00:26:41.910 Relative Write Latency: 0 00:26:41.910 Idle Power: Not Reported 00:26:41.910 Active Power: Not Reported 00:26:41.910 Non-Operational Permissive Mode: Not Supported 00:26:41.910 00:26:41.910 Health Information 00:26:41.910 ================== 00:26:41.910 Critical Warnings: 00:26:41.910 Available Spare Space: OK 00:26:41.910 Temperature: OK 00:26:41.910 Device Reliability: OK 00:26:41.910 Read Only: No 00:26:41.910 Volatile Memory Backup: OK 00:26:41.910 Current Temperature: 0 Kelvin (-273 Celsius) 00:26:41.910 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:26:41.910 Available Spare: 0% 00:26:41.910 Available Spare Threshold: 0% [2024-10-01 22:25:37.145087] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:26:41.910 [2024-10-01 22:25:37.145096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:26:41.910 [2024-10-01 22:25:37.145124] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:26:41.910 [2024-10-01 22:25:37.145133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.910 [2024-10-01 22:25:37.145140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.910 [2024-10-01 22:25:37.145146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.910 [2024-10-01 22:25:37.145153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.910 [2024-10-01 22:25:37.145182] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:26:41.910 [2024-10-01 22:25:37.145192] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:26:41.910 [2024-10-01 22:25:37.146188] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:26:41.910 [2024-10-01 22:25:37.146230] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:26:41.910 [2024-10-01 22:25:37.146241] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:26:41.910 [2024-10-01 22:25:37.147191] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:26:41.910 [2024-10-01 22:25:37.147202] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:26:41.910 [2024-10-01 22:25:37.147270] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:26:41.910 [2024-10-01 22:25:37.149215] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:26:42.170 00:26:42.171 Life Percentage Used: 0% 00:26:42.171 Data Units Read: 0 00:26:42.171 Data Units Written: 0 00:26:42.171 Host Read Commands: 0 00:26:42.171 Host Write Commands: 0 00:26:42.171 Controller Busy Time: 0 minutes 00:26:42.171 Power Cycles: 0 00:26:42.171 Power On Hours: 0 hours 00:26:42.171 Unsafe Shutdowns: 0 00:26:42.171 Unrecoverable Media Errors: 0 00:26:42.171 Lifetime Error Log Entries: 0 00:26:42.171 Warning Temperature Time: 0 minutes 00:26:42.171 Critical Temperature Time: 0 minutes 00:26:42.171 00:26:42.171 Number of Queues 00:26:42.171 ================ 00:26:42.171 Number of I/O Submission Queues: 127 00:26:42.171 Number of I/O Completion Queues: 127 00:26:42.171 00:26:42.171 Active Namespaces 00:26:42.171 ================= 00:26:42.171 Namespace ID:1 00:26:42.171 Error Recovery Timeout: Unlimited 00:26:42.171 Command Set Identifier: NVM (00h) 00:26:42.171 Deallocate: Supported 00:26:42.171 Deallocated/Unwritten Error: Not Supported 00:26:42.171 Deallocated Read Value: Unknown 00:26:42.171 Deallocate in Write Zeroes: Not Supported 00:26:42.171 Deallocated Guard Field: 0xFFFF 00:26:42.171 Flush: Supported 00:26:42.171 Reservation: Supported 00:26:42.171 Namespace Sharing Capabilities: Multiple Controllers 00:26:42.171 Size (in LBAs): 131072 (0GiB) 00:26:42.171 Capacity (in LBAs): 131072 (0GiB) 00:26:42.171 Utilization (in LBAs): 131072 (0GiB) 00:26:42.171 NGUID: 72ECF17165F64E6EA8478A86020B8A52 00:26:42.171 UUID: 72ecf171-65f6-4e6e-a847-8a86020b8a52 00:26:42.171 Thin Provisioning: Not Supported 00:26:42.171 Per-NS Atomic Units: Yes 00:26:42.171 Atomic Boundary Size (Normal): 0 00:26:42.171 Atomic Boundary Size (PFail): 0 00:26:42.171 Atomic Boundary Offset: 0 00:26:42.171 Maximum Single Source Range Length: 65535 00:26:42.171 Maximum Copy Length: 65535 00:26:42.171 Maximum Source Range Count: 1 00:26:42.171 NGUID/EUI64 Never Reused: No 00:26:42.171 Namespace Write Protected: No 00:26:42.171 Number of LBA Formats: 1 00:26:42.171 Current LBA Format: LBA Format #00 00:26:42.171 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:42.171 00:26:42.171 22:25:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:26:42.171 [2024-10-01 22:25:37.331239] vfio_user.c:2836:enable_ctrlr: *NOTICE*:
/var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:26:47.509 Initializing NVMe Controllers 00:26:47.509 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:26:47.509 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:26:47.509 Initialization complete. Launching workers. 00:26:47.509 ======================================================== 00:26:47.509 Latency(us) 00:26:47.509 Device Information : IOPS MiB/s Average min max 00:26:47.509 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39970.52 156.13 3202.59 851.17 7924.50 00:26:47.509 ======================================================== 00:26:47.509 Total : 39970.52 156.13 3202.59 851.17 7924.50 00:26:47.509 00:26:47.509 [2024-10-01 22:25:42.352476] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:26:47.509 22:25:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:26:47.509 [2024-10-01 22:25:42.533312] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:26:52.948 Initializing NVMe Controllers 00:26:52.948 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:26:52.948 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:26:52.948 Initialization complete. Launching workers. 00:26:52.948 ======================================================== 00:26:52.948 Latency(us) 00:26:52.948 Device Information : IOPS MiB/s Average min max 00:26:52.948 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7982.12 6981.66 10976.73 00:26:52.948 ======================================================== 00:26:52.948 Total : 16051.20 62.70 7982.12 6981.66 10976.73 00:26:52.948 00:26:52.948 [2024-10-01 22:25:47.566083] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:26:52.948 22:25:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:26:52.948 [2024-10-01 22:25:47.755930] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:26:58.250 [2024-10-01 22:25:52.838858] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:26:58.250 Initializing NVMe Controllers 00:26:58.250 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:26:58.250 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:26:58.250 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:26:58.250 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:26:58.250 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:26:58.250 Initialization complete. Launching workers. 
00:26:58.250 Starting thread on core 2 00:26:58.250 Starting thread on core 3 00:26:58.250 Starting thread on core 1 00:26:58.250 22:25:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:26:58.250 [2024-10-01 22:25:53.109058] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:27:01.547 [2024-10-01 22:25:56.169066] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:27:01.547 Initializing NVMe Controllers 00:27:01.547 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:27:01.547 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:27:01.547 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:27:01.547 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:27:01.547 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:27:01.547 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:27:01.547 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:27:01.547 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:27:01.547 Initialization complete. Launching workers. 00:27:01.547 Starting thread on core 1 with urgent priority queue 00:27:01.547 Starting thread on core 2 with urgent priority queue 00:27:01.547 Starting thread on core 3 with urgent priority queue 00:27:01.547 Starting thread on core 0 with urgent priority queue 00:27:01.547 SPDK bdev Controller (SPDK1 ) core 0: 13943.67 IO/s 7.17 secs/100000 ios 00:27:01.547 SPDK bdev Controller (SPDK1 ) core 1: 10170.33 IO/s 9.83 secs/100000 ios 00:27:01.547 SPDK bdev Controller (SPDK1 ) core 2: 10585.33 IO/s 9.45 secs/100000 ios 00:27:01.547 SPDK bdev Controller (SPDK1 ) core 3: 12339.00 IO/s 8.10 secs/100000 ios 00:27:01.547 ======================================================== 00:27:01.547 00:27:01.547 22:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:27:01.547 [2024-10-01 22:25:56.433059] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:27:01.547 Initializing NVMe Controllers 00:27:01.547 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:27:01.547 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:27:01.547 Namespace ID: 1 size: 0GB 00:27:01.547 Initialization complete. 00:27:01.547 INFO: using host memory buffer for IO 00:27:01.547 Hello world! 
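Every tool exercised in this pass (spdk_nvme_identify, spdk_nvme_perf, reconnect, arbitration, hello_world) reaches the controller through the same vfio-user transport ID string rather than a PCI address; condensed from the perf invocation above:

  build/bin/spdk_nvme_perf \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
      -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2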
00:27:01.547 [2024-10-01 22:25:56.470266] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:27:01.547 22:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:27:01.547 [2024-10-01 22:25:56.731051] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:27:02.930 Initializing NVMe Controllers 00:27:02.930 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:27:02.930 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:27:02.930 Initialization complete. Launching workers. 00:27:02.930 submit (in ns) avg, min, max = 8074.3, 3898.3, 4000001.7 00:27:02.930 complete (in ns) avg, min, max = 18779.6, 2389.2, 3998529.2 00:27:02.930 00:27:02.930 Submit histogram 00:27:02.930 ================ 00:27:02.930 Range in us Cumulative Count 00:27:02.930 3.893 - 3.920: 1.9144% ( 361) 00:27:02.931 3.920 - 3.947: 9.3546% ( 1403) 00:27:02.931 3.947 - 3.973: 19.6001% ( 1932) 00:27:02.931 3.973 - 4.000: 31.5745% ( 2258) 00:27:02.931 4.000 - 4.027: 42.5306% ( 2066) 00:27:02.931 4.027 - 4.053: 53.7996% ( 2125) 00:27:02.931 4.053 - 4.080: 70.8119% ( 3208) 00:27:02.931 4.080 - 4.107: 84.8756% ( 2652) 00:27:02.931 4.107 - 4.133: 93.4083% ( 1609) 00:27:02.931 4.133 - 4.160: 97.7144% ( 812) 00:27:02.931 4.160 - 4.187: 99.1303% ( 267) 00:27:02.931 4.187 - 4.213: 99.4432% ( 59) 00:27:02.931 4.213 - 4.240: 99.4750% ( 6) 00:27:02.931 4.240 - 4.267: 99.4856% ( 2) 00:27:02.931 4.267 - 4.293: 99.4909% ( 1) 00:27:02.931 4.347 - 4.373: 99.4962% ( 1) 00:27:02.931 4.560 - 4.587: 99.5015% ( 1) 00:27:02.931 4.667 - 4.693: 99.5068% ( 1) 00:27:02.931 4.800 - 4.827: 99.5121% ( 1) 00:27:02.931 4.987 - 5.013: 99.5174% ( 1) 00:27:02.931 5.173 - 5.200: 99.5227% ( 1) 00:27:02.931 5.253 - 5.280: 99.5280% ( 1) 00:27:02.931 5.467 - 5.493: 99.5333% ( 1) 00:27:02.931 5.547 - 5.573: 99.5386% ( 1) 00:27:02.931 5.813 - 5.840: 99.5492% ( 2) 00:27:02.931 5.867 - 5.893: 99.5545% ( 1) 00:27:02.931 6.000 - 6.027: 99.5651% ( 2) 00:27:02.931 6.027 - 6.053: 99.5758% ( 2) 00:27:02.931 6.053 - 6.080: 99.5811% ( 1) 00:27:02.931 6.080 - 6.107: 99.5864% ( 1) 00:27:02.931 6.133 - 6.160: 99.5917% ( 1) 00:27:02.931 6.187 - 6.213: 99.6023% ( 2) 00:27:02.931 6.213 - 6.240: 99.6129% ( 2) 00:27:02.931 6.320 - 6.347: 99.6235% ( 2) 00:27:02.931 6.347 - 6.373: 99.6288% ( 1) 00:27:02.931 6.427 - 6.453: 99.6394% ( 2) 00:27:02.931 6.453 - 6.480: 99.6447% ( 1) 00:27:02.931 6.480 - 6.507: 99.6500% ( 1) 00:27:02.931 6.507 - 6.533: 99.6553% ( 1) 00:27:02.931 6.533 - 6.560: 99.6606% ( 1) 00:27:02.931 6.587 - 6.613: 99.6712% ( 2) 00:27:02.931 6.640 - 6.667: 99.6765% ( 1) 00:27:02.931 6.720 - 6.747: 99.6871% ( 2) 00:27:02.931 6.747 - 6.773: 99.6924% ( 1) 00:27:02.931 6.773 - 6.800: 99.7136% ( 4) 00:27:02.931 6.800 - 6.827: 99.7189% ( 1) 00:27:02.931 6.827 - 6.880: 99.7348% ( 3) 00:27:02.931 6.880 - 6.933: 99.7508% ( 3) 00:27:02.931 6.933 - 6.987: 99.7561% ( 1) 00:27:02.931 6.987 - 7.040: 99.7720% ( 3) 00:27:02.931 7.040 - 7.093: 99.7826% ( 2) 00:27:02.931 7.093 - 7.147: 99.7932% ( 2) 00:27:02.931 7.147 - 7.200: 99.8144% ( 4) 00:27:02.931 7.200 - 7.253: 99.8197% ( 1) 00:27:02.931 7.307 - 7.360: 99.8250% ( 1) 00:27:02.931 7.520 - 7.573: 99.8303% ( 1) 00:27:02.931 7.573 - 7.627: 99.8462% ( 3) 00:27:02.931 7.680 - 7.733: 99.8515% ( 1) 
00:27:02.931 7.787 - 7.840: 99.8568% ( 1) 00:27:02.931 7.840 - 7.893: 99.8621% ( 1) 00:27:02.931 7.947 - 8.000: 99.8674% ( 1) 00:27:02.931 8.053 - 8.107: 99.8780% ( 2) 00:27:02.931 9.120 - 9.173: 99.8833% ( 1) 00:27:02.931 13.387 - 13.440: 99.8886% ( 1) 00:27:02.931 14.293 - 14.400: 99.8939% ( 1) 00:27:02.931 14.613 - 14.720: 99.8992% ( 1) 00:27:02.931 3986.773 - 4014.080: 100.0000% ( 19) 00:27:02.931 00:27:02.931 Complete histogram 00:27:02.931 ================== 00:27:02.931 Range in us Cumulative Count 00:27:02.931 2.387 - 2.400: 0.9015% ( 170) 00:27:02.931 2.400 - 2.413: 1.1614% ( 49) 00:27:02.931 2.413 - 2.427: 1.2993% ( 26) 00:27:02.931 2.427 - 2.440: 29.3737% ( 5294) 00:27:02.931 2.440 - 2.453: 53.4868% ( 4547) 00:27:02.931 2.453 - 2.467: 65.2649% ( 2221) 00:27:02.931 2.467 - 2.480: 74.8634% ( 1810) 00:27:02.931 2.480 - 2.493: 79.9067% ( 951) 00:27:02.931 2.493 - 2.507: 82.6696% ( 521) 00:27:02.931 2.507 - 2.520: 88.8848% ( 1172) 00:27:02.931 2.520 - [2024-10-01 22:25:57.753467] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:27:02.931 2.533: 93.9969% ( 964) 00:27:02.931 2.533 - 2.547: 96.6113% ( 493) 00:27:02.931 2.547 - 2.560: 98.3454% ( 327) 00:27:02.931 2.560 - 2.573: 99.0932% ( 141) 00:27:02.931 2.573 - 2.587: 99.3636% ( 51) 00:27:02.931 2.587 - 2.600: 99.3954% ( 6) 00:27:02.931 2.600 - 2.613: 99.4008% ( 1) 00:27:02.931 4.400 - 4.427: 99.4061% ( 1) 00:27:02.931 4.533 - 4.560: 99.4114% ( 1) 00:27:02.931 4.587 - 4.613: 99.4167% ( 1) 00:27:02.931 4.720 - 4.747: 99.4220% ( 1) 00:27:02.931 4.800 - 4.827: 99.4273% ( 1) 00:27:02.931 4.853 - 4.880: 99.4326% ( 1) 00:27:02.931 4.907 - 4.933: 99.4379% ( 1) 00:27:02.931 4.960 - 4.987: 99.4432% ( 1) 00:27:02.931 5.040 - 5.067: 99.4591% ( 3) 00:27:02.931 5.093 - 5.120: 99.4644% ( 1) 00:27:02.931 5.120 - 5.147: 99.4697% ( 1) 00:27:02.931 5.173 - 5.200: 99.4750% ( 1) 00:27:02.931 5.253 - 5.280: 99.4856% ( 2) 00:27:02.931 5.333 - 5.360: 99.4909% ( 1) 00:27:02.931 5.440 - 5.467: 99.5015% ( 2) 00:27:02.931 5.467 - 5.493: 99.5068% ( 1) 00:27:02.931 5.493 - 5.520: 99.5121% ( 1) 00:27:02.931 5.520 - 5.547: 99.5174% ( 1) 00:27:02.931 5.547 - 5.573: 99.5227% ( 1) 00:27:02.931 5.600 - 5.627: 99.5333% ( 2) 00:27:02.931 5.627 - 5.653: 99.5386% ( 1) 00:27:02.931 5.680 - 5.707: 99.5439% ( 1) 00:27:02.931 5.733 - 5.760: 99.5492% ( 1) 00:27:02.931 5.840 - 5.867: 99.5545% ( 1) 00:27:02.931 5.920 - 5.947: 99.5598% ( 1) 00:27:02.931 6.347 - 6.373: 99.5651% ( 1) 00:27:02.931 6.427 - 6.453: 99.5705% ( 1) 00:27:02.931 7.360 - 7.413: 99.5758% ( 1) 00:27:02.931 7.520 - 7.573: 99.5811% ( 1) 00:27:02.931 12.640 - 12.693: 99.5864% ( 1) 00:27:02.931 13.173 - 13.227: 99.5917% ( 1) 00:27:02.931 3932.160 - 3959.467: 99.5970% ( 1) 00:27:02.931 3986.773 - 4014.080: 100.0000% ( 76) 00:27:02.931 00:27:02.931 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:27:02.931 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:27:02.931 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:27:02.931 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:27:02.931 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:27:02.931 [ 00:27:02.931 { 00:27:02.931 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:02.931 "subtype": "Discovery", 00:27:02.931 "listen_addresses": [], 00:27:02.931 "allow_any_host": true, 00:27:02.931 "hosts": [] 00:27:02.931 }, 00:27:02.931 { 00:27:02.931 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:27:02.931 "subtype": "NVMe", 00:27:02.931 "listen_addresses": [ 00:27:02.931 { 00:27:02.931 "trtype": "VFIOUSER", 00:27:02.931 "adrfam": "IPv4", 00:27:02.931 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:27:02.931 "trsvcid": "0" 00:27:02.931 } 00:27:02.931 ], 00:27:02.931 "allow_any_host": true, 00:27:02.931 "hosts": [], 00:27:02.931 "serial_number": "SPDK1", 00:27:02.931 "model_number": "SPDK bdev Controller", 00:27:02.931 "max_namespaces": 32, 00:27:02.931 "min_cntlid": 1, 00:27:02.931 "max_cntlid": 65519, 00:27:02.931 "namespaces": [ 00:27:02.931 { 00:27:02.931 "nsid": 1, 00:27:02.931 "bdev_name": "Malloc1", 00:27:02.931 "name": "Malloc1", 00:27:02.931 "nguid": "72ECF17165F64E6EA8478A86020B8A52", 00:27:02.931 "uuid": "72ecf171-65f6-4e6e-a847-8a86020b8a52" 00:27:02.931 } 00:27:02.931 ] 00:27:02.931 }, 00:27:02.931 { 00:27:02.931 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:27:02.931 "subtype": "NVMe", 00:27:02.931 "listen_addresses": [ 00:27:02.931 { 00:27:02.931 "trtype": "VFIOUSER", 00:27:02.931 "adrfam": "IPv4", 00:27:02.931 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:27:02.931 "trsvcid": "0" 00:27:02.931 } 00:27:02.931 ], 00:27:02.931 "allow_any_host": true, 00:27:02.931 "hosts": [], 00:27:02.931 "serial_number": "SPDK2", 00:27:02.931 "model_number": "SPDK bdev Controller", 00:27:02.931 "max_namespaces": 32, 00:27:02.931 "min_cntlid": 1, 00:27:02.931 "max_cntlid": 65519, 00:27:02.931 "namespaces": [ 00:27:02.931 { 00:27:02.931 "nsid": 1, 00:27:02.931 "bdev_name": "Malloc2", 00:27:02.931 "name": "Malloc2", 00:27:02.931 "nguid": "B225E8F97B1C490FAC861B06371AEF4F", 00:27:02.931 "uuid": "b225e8f9-7b1c-490f-ac86-1b06371aef4f" 00:27:02.931 } 00:27:02.931 ] 00:27:02.931 } 00:27:02.931 ] 00:27:02.931 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:27:02.931 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:27:02.932 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=163992 00:27:02.932 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:27:02.932 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:27:02.932 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:02.932 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:27:02.932 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=1 00:27:02.932 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:27:02.932 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:27:02.932 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:27:02.932 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=2 00:27:02.932 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:27:02.932 [2024-10-01 22:25:58.143060] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:27:03.192 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:03.192 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:03.192 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:27:03.192 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:27:03.192 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:27:03.192 Malloc3 00:27:03.192 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:27:03.452 [2024-10-01 22:25:58.544798] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:27:03.452 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:27:03.452 Asynchronous Event Request test 00:27:03.452 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:27:03.452 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:27:03.452 Registering asynchronous event callbacks... 00:27:03.452 Starting namespace attribute notice tests for all controllers... 00:27:03.452 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:27:03.452 aer_cb - Changed Namespace 00:27:03.452 Cleaning up... 
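The block above is the namespace-hotplug AER check against vfio-user1: an AER listener is parked on cnode1, a malloc bdev is hot-added as a second namespace, and the listener must observe the resulting Namespace Attribute Changed event ("aer_cb - Changed Namespace") before the harness reaps it. A condensed sketch of the traced sequence, with paths shortened to the SPDK repo root (the Jenkins workspace prefix dropped); waitforfile is the autotest_common.sh helper, reconstructed after the second AER pass further down:

traddr=/var/run/vfio-user/domain/vfio-user1/1
subnqn=nqn.2019-07.io.spdk:cnode1

# Park the AER listener on the controller; it creates the touch file
# once its event callbacks are registered.
test/nvme/aer/aer -r "trtype:VFIOUSER traddr:$traddr subnqn:$subnqn" \
    -n 2 -g -t /tmp/aer_touch_file &
aerpid=$!
waitforfile /tmp/aer_touch_file
rm -f /tmp/aer_touch_file

# Hot-add a 64 MiB malloc bdev (512-byte blocks) as namespace 2; the
# target raises the Namespace Attribute Changed AER.
scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3
scripts/rpc.py nvmf_subsystem_add_ns "$subnqn" Malloc3 -n 2

# Confirm the new namespace and reap the listener.
scripts/rpc.py nvmf_get_subsystems
wait $aerpid

The subsystem listing that follows reflects the hot-add: cnode1 now carries Malloc3 as nsid 2 alongside Malloc1.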
00:27:03.715 [ 00:27:03.715 { 00:27:03.715 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:03.715 "subtype": "Discovery", 00:27:03.715 "listen_addresses": [], 00:27:03.715 "allow_any_host": true, 00:27:03.715 "hosts": [] 00:27:03.715 }, 00:27:03.715 { 00:27:03.715 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:27:03.715 "subtype": "NVMe", 00:27:03.715 "listen_addresses": [ 00:27:03.715 { 00:27:03.715 "trtype": "VFIOUSER", 00:27:03.715 "adrfam": "IPv4", 00:27:03.715 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:27:03.715 "trsvcid": "0" 00:27:03.715 } 00:27:03.715 ], 00:27:03.715 "allow_any_host": true, 00:27:03.715 "hosts": [], 00:27:03.715 "serial_number": "SPDK1", 00:27:03.715 "model_number": "SPDK bdev Controller", 00:27:03.715 "max_namespaces": 32, 00:27:03.715 "min_cntlid": 1, 00:27:03.715 "max_cntlid": 65519, 00:27:03.715 "namespaces": [ 00:27:03.715 { 00:27:03.715 "nsid": 1, 00:27:03.715 "bdev_name": "Malloc1", 00:27:03.715 "name": "Malloc1", 00:27:03.715 "nguid": "72ECF17165F64E6EA8478A86020B8A52", 00:27:03.715 "uuid": "72ecf171-65f6-4e6e-a847-8a86020b8a52" 00:27:03.715 }, 00:27:03.715 { 00:27:03.715 "nsid": 2, 00:27:03.715 "bdev_name": "Malloc3", 00:27:03.715 "name": "Malloc3", 00:27:03.715 "nguid": "B18A13821BCD45209AAA7FE48B2A7DAF", 00:27:03.715 "uuid": "b18a1382-1bcd-4520-9aaa-7fe48b2a7daf" 00:27:03.715 } 00:27:03.715 ] 00:27:03.715 }, 00:27:03.715 { 00:27:03.715 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:27:03.715 "subtype": "NVMe", 00:27:03.715 "listen_addresses": [ 00:27:03.715 { 00:27:03.715 "trtype": "VFIOUSER", 00:27:03.715 "adrfam": "IPv4", 00:27:03.715 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:27:03.715 "trsvcid": "0" 00:27:03.715 } 00:27:03.715 ], 00:27:03.715 "allow_any_host": true, 00:27:03.715 "hosts": [], 00:27:03.715 "serial_number": "SPDK2", 00:27:03.715 "model_number": "SPDK bdev Controller", 00:27:03.715 "max_namespaces": 32, 00:27:03.715 "min_cntlid": 1, 00:27:03.715 "max_cntlid": 65519, 00:27:03.715 "namespaces": [ 00:27:03.715 { 00:27:03.715 "nsid": 1, 00:27:03.715 "bdev_name": "Malloc2", 00:27:03.715 "name": "Malloc2", 00:27:03.715 "nguid": "B225E8F97B1C490FAC861B06371AEF4F", 00:27:03.715 "uuid": "b225e8f9-7b1c-490f-ac86-1b06371aef4f" 00:27:03.715 } 00:27:03.715 ] 00:27:03.715 } 00:27:03.715 ] 00:27:03.715 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 163992 00:27:03.715 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:27:03.715 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:27:03.715 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:27:03.715 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:27:03.715 [2024-10-01 22:25:58.776114] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:27:03.715 [2024-10-01 22:25:58.776160] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164033 ] 00:27:03.715 [2024-10-01 22:25:58.808181] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:27:03.715 [2024-10-01 22:25:58.816841] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:27:03.715 [2024-10-01 22:25:58.816866] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7feaf6647000 00:27:03.715 [2024-10-01 22:25:58.817848] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:27:03.715 [2024-10-01 22:25:58.818850] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:27:03.715 [2024-10-01 22:25:58.819856] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:27:03.715 [2024-10-01 22:25:58.820861] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:27:03.715 [2024-10-01 22:25:58.821864] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:27:03.715 [2024-10-01 22:25:58.822875] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:27:03.715 [2024-10-01 22:25:58.823882] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:27:03.715 [2024-10-01 22:25:58.824888] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:27:03.715 [2024-10-01 22:25:58.825895] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:27:03.715 [2024-10-01 22:25:58.825905] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7feaf663c000 00:27:03.715 [2024-10-01 22:25:58.827230] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:27:03.715 [2024-10-01 22:25:58.844439] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:27:03.715 [2024-10-01 22:25:58.844464] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:27:03.715 [2024-10-01 22:25:58.846514] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:27:03.715 [2024-10-01 22:25:58.846561] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:27:03.715 [2024-10-01 22:25:58.846648] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:27:03.715 [2024-10-01 
22:25:58.846666] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:27:03.715 [2024-10-01 22:25:58.846672] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:27:03.715 [2024-10-01 22:25:58.847518] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:27:03.715 [2024-10-01 22:25:58.847528] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:27:03.715 [2024-10-01 22:25:58.847535] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:27:03.716 [2024-10-01 22:25:58.848520] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:27:03.716 [2024-10-01 22:25:58.848529] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:27:03.716 [2024-10-01 22:25:58.848540] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:27:03.716 [2024-10-01 22:25:58.849522] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:27:03.716 [2024-10-01 22:25:58.849533] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:03.716 [2024-10-01 22:25:58.850529] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:27:03.716 [2024-10-01 22:25:58.850538] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:27:03.716 [2024-10-01 22:25:58.850544] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:27:03.716 [2024-10-01 22:25:58.850550] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:03.716 [2024-10-01 22:25:58.850656] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:27:03.716 [2024-10-01 22:25:58.850661] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:03.716 [2024-10-01 22:25:58.850666] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:27:03.716 [2024-10-01 22:25:58.851535] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:27:03.716 [2024-10-01 22:25:58.852543] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:27:03.716 [2024-10-01 22:25:58.853550] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: 
offset 0x14, value 0x460001 00:27:03.716 [2024-10-01 22:25:58.854553] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:27:03.716 [2024-10-01 22:25:58.854595] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:03.716 [2024-10-01 22:25:58.855563] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:27:03.716 [2024-10-01 22:25:58.855573] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:03.716 [2024-10-01 22:25:58.855578] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:27:03.716 [2024-10-01 22:25:58.855599] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:27:03.716 [2024-10-01 22:25:58.855607] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:27:03.716 [2024-10-01 22:25:58.855620] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:27:03.716 [2024-10-01 22:25:58.855629] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:27:03.716 [2024-10-01 22:25:58.855633] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:27:03.716 [2024-10-01 22:25:58.855644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:27:03.716 [2024-10-01 22:25:58.863633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:27:03.716 [2024-10-01 22:25:58.863648] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:27:03.716 [2024-10-01 22:25:58.863653] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:27:03.716 [2024-10-01 22:25:58.863658] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:27:03.716 [2024-10-01 22:25:58.863663] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:27:03.716 [2024-10-01 22:25:58.863668] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:27:03.716 [2024-10-01 22:25:58.863672] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:27:03.716 [2024-10-01 22:25:58.863677] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:27:03.716 [2024-10-01 22:25:58.863685] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:27:03.716 [2024-10-01 22:25:58.863695] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT 
CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:27:03.716 [2024-10-01 22:25:58.871632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:27:03.716 [2024-10-01 22:25:58.871645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.716 [2024-10-01 22:25:58.871654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.716 [2024-10-01 22:25:58.871663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.716 [2024-10-01 22:25:58.871672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.716 [2024-10-01 22:25:58.871677] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:27:03.716 [2024-10-01 22:25:58.871687] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:03.716 [2024-10-01 22:25:58.871696] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:27:03.716 [2024-10-01 22:25:58.879632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:27:03.716 [2024-10-01 22:25:58.879640] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:27:03.716 [2024-10-01 22:25:58.879646] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:27:03.716 [2024-10-01 22:25:58.879653] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:27:03.716 [2024-10-01 22:25:58.879661] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:27:03.716 [2024-10-01 22:25:58.879671] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:27:03.716 [2024-10-01 22:25:58.887630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:27:03.716 [2024-10-01 22:25:58.887696] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:27:03.716 [2024-10-01 22:25:58.887707] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:27:03.716 [2024-10-01 22:25:58.887715] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:27:03.716 [2024-10-01 22:25:58.887720] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:27:03.716 [2024-10-01 22:25:58.887724] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number 
of PRP entries: 1 00:27:03.716 [2024-10-01 22:25:58.887730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:27:03.716 [2024-10-01 22:25:58.895631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:27:03.716 [2024-10-01 22:25:58.895643] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:27:03.716 [2024-10-01 22:25:58.895653] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:27:03.716 [2024-10-01 22:25:58.895661] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:27:03.716 [2024-10-01 22:25:58.895669] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:27:03.716 [2024-10-01 22:25:58.895674] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:27:03.716 [2024-10-01 22:25:58.895677] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:27:03.716 [2024-10-01 22:25:58.895684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:27:03.716 [2024-10-01 22:25:58.903631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:27:03.716 [2024-10-01 22:25:58.903646] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:27:03.716 [2024-10-01 22:25:58.903654] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:27:03.716 [2024-10-01 22:25:58.903662] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:27:03.716 [2024-10-01 22:25:58.903666] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:27:03.716 [2024-10-01 22:25:58.903670] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:27:03.716 [2024-10-01 22:25:58.903676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:27:03.716 [2024-10-01 22:25:58.911631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:27:03.716 [2024-10-01 22:25:58.911641] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:27:03.716 [2024-10-01 22:25:58.911648] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:27:03.716 [2024-10-01 22:25:58.911657] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:27:03.716 [2024-10-01 22:25:58.911663] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:27:03.716 [2024-10-01 22:25:58.911668] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:27:03.716 [2024-10-01 22:25:58.911675] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:27:03.716 [2024-10-01 22:25:58.911680] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:27:03.716 [2024-10-01 22:25:58.911685] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:27:03.716 [2024-10-01 22:25:58.911690] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:27:03.716 [2024-10-01 22:25:58.911707] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:27:03.716 [2024-10-01 22:25:58.919631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:27:03.716 [2024-10-01 22:25:58.919646] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:27:03.716 [2024-10-01 22:25:58.927632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:27:03.716 [2024-10-01 22:25:58.927646] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:27:03.716 [2024-10-01 22:25:58.935630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:27:03.716 [2024-10-01 22:25:58.935644] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:27:03.716 [2024-10-01 22:25:58.943632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:27:03.716 [2024-10-01 22:25:58.943651] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:27:03.716 [2024-10-01 22:25:58.943656] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:27:03.716 [2024-10-01 22:25:58.943660] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:27:03.716 [2024-10-01 22:25:58.943663] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:27:03.716 [2024-10-01 22:25:58.943667] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:27:03.716 [2024-10-01 22:25:58.943673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:27:03.716 [2024-10-01 22:25:58.943681] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:27:03.716 [2024-10-01 22:25:58.943686] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:27:03.716 [2024-10-01 22:25:58.943689] 
nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:27:03.716 [2024-10-01 22:25:58.943695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:27:03.716 [2024-10-01 22:25:58.943703] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:27:03.716 [2024-10-01 22:25:58.943707] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:27:03.716 [2024-10-01 22:25:58.943711] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:27:03.716 [2024-10-01 22:25:58.943717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:27:03.716 [2024-10-01 22:25:58.943724] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:27:03.716 [2024-10-01 22:25:58.943729] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:27:03.716 [2024-10-01 22:25:58.943734] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:27:03.716 [2024-10-01 22:25:58.943740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:27:03.716 [2024-10-01 22:25:58.951633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:27:03.716 [2024-10-01 22:25:58.951648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:27:03.716 [2024-10-01 22:25:58.951659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:27:03.716 [2024-10-01 22:25:58.951666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:27:03.716 ===================================================== 00:27:03.716 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:27:03.716 ===================================================== 00:27:03.716 Controller Capabilities/Features 00:27:03.716 ================================ 00:27:03.716 Vendor ID: 4e58 00:27:03.716 Subsystem Vendor ID: 4e58 00:27:03.716 Serial Number: SPDK2 00:27:03.716 Model Number: SPDK bdev Controller 00:27:03.716 Firmware Version: 25.01 00:27:03.716 Recommended Arb Burst: 6 00:27:03.716 IEEE OUI Identifier: 8d 6b 50 00:27:03.716 Multi-path I/O 00:27:03.716 May have multiple subsystem ports: Yes 00:27:03.716 May have multiple controllers: Yes 00:27:03.716 Associated with SR-IOV VF: No 00:27:03.716 Max Data Transfer Size: 131072 00:27:03.716 Max Number of Namespaces: 32 00:27:03.716 Max Number of I/O Queues: 127 00:27:03.716 NVMe Specification Version (VS): 1.3 00:27:03.716 NVMe Specification Version (Identify): 1.3 00:27:03.716 Maximum Queue Entries: 256 00:27:03.716 Contiguous Queues Required: Yes 00:27:03.716 Arbitration Mechanisms Supported 00:27:03.716 Weighted Round Robin: Not Supported 00:27:03.716 Vendor Specific: Not Supported 00:27:03.716 Reset Timeout: 15000 ms 00:27:03.716 Doorbell Stride: 4 bytes 00:27:03.716 NVM Subsystem Reset: Not Supported 00:27:03.716 Command 
Sets Supported 00:27:03.716 NVM Command Set: Supported 00:27:03.716 Boot Partition: Not Supported 00:27:03.716 Memory Page Size Minimum: 4096 bytes 00:27:03.716 Memory Page Size Maximum: 4096 bytes 00:27:03.716 Persistent Memory Region: Not Supported 00:27:03.716 Optional Asynchronous Events Supported 00:27:03.716 Namespace Attribute Notices: Supported 00:27:03.716 Firmware Activation Notices: Not Supported 00:27:03.716 ANA Change Notices: Not Supported 00:27:03.716 PLE Aggregate Log Change Notices: Not Supported 00:27:03.716 LBA Status Info Alert Notices: Not Supported 00:27:03.716 EGE Aggregate Log Change Notices: Not Supported 00:27:03.716 Normal NVM Subsystem Shutdown event: Not Supported 00:27:03.716 Zone Descriptor Change Notices: Not Supported 00:27:03.716 Discovery Log Change Notices: Not Supported 00:27:03.716 Controller Attributes 00:27:03.716 128-bit Host Identifier: Supported 00:27:03.716 Non-Operational Permissive Mode: Not Supported 00:27:03.716 NVM Sets: Not Supported 00:27:03.716 Read Recovery Levels: Not Supported 00:27:03.716 Endurance Groups: Not Supported 00:27:03.716 Predictable Latency Mode: Not Supported 00:27:03.716 Traffic Based Keep ALive: Not Supported 00:27:03.716 Namespace Granularity: Not Supported 00:27:03.716 SQ Associations: Not Supported 00:27:03.716 UUID List: Not Supported 00:27:03.716 Multi-Domain Subsystem: Not Supported 00:27:03.716 Fixed Capacity Management: Not Supported 00:27:03.716 Variable Capacity Management: Not Supported 00:27:03.716 Delete Endurance Group: Not Supported 00:27:03.716 Delete NVM Set: Not Supported 00:27:03.716 Extended LBA Formats Supported: Not Supported 00:27:03.716 Flexible Data Placement Supported: Not Supported 00:27:03.716 00:27:03.716 Controller Memory Buffer Support 00:27:03.716 ================================ 00:27:03.716 Supported: No 00:27:03.716 00:27:03.716 Persistent Memory Region Support 00:27:03.716 ================================ 00:27:03.716 Supported: No 00:27:03.716 00:27:03.716 Admin Command Set Attributes 00:27:03.716 ============================ 00:27:03.716 Security Send/Receive: Not Supported 00:27:03.716 Format NVM: Not Supported 00:27:03.716 Firmware Activate/Download: Not Supported 00:27:03.716 Namespace Management: Not Supported 00:27:03.716 Device Self-Test: Not Supported 00:27:03.716 Directives: Not Supported 00:27:03.716 NVMe-MI: Not Supported 00:27:03.716 Virtualization Management: Not Supported 00:27:03.716 Doorbell Buffer Config: Not Supported 00:27:03.716 Get LBA Status Capability: Not Supported 00:27:03.716 Command & Feature Lockdown Capability: Not Supported 00:27:03.716 Abort Command Limit: 4 00:27:03.716 Async Event Request Limit: 4 00:27:03.716 Number of Firmware Slots: N/A 00:27:03.716 Firmware Slot 1 Read-Only: N/A 00:27:03.716 Firmware Activation Without Reset: N/A 00:27:03.716 Multiple Update Detection Support: N/A 00:27:03.716 Firmware Update Granularity: No Information Provided 00:27:03.716 Per-Namespace SMART Log: No 00:27:03.716 Asymmetric Namespace Access Log Page: Not Supported 00:27:03.716 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:27:03.716 Command Effects Log Page: Supported 00:27:03.716 Get Log Page Extended Data: Supported 00:27:03.716 Telemetry Log Pages: Not Supported 00:27:03.716 Persistent Event Log Pages: Not Supported 00:27:03.716 Supported Log Pages Log Page: May Support 00:27:03.716 Commands Supported & Effects Log Page: Not Supported 00:27:03.716 Feature Identifiers & Effects Log Page:May Support 00:27:03.716 NVMe-MI Commands & Effects Log Page: May Support 
00:27:03.717 Data Area 4 for Telemetry Log: Not Supported 00:27:03.717 Error Log Page Entries Supported: 128 00:27:03.717 Keep Alive: Supported 00:27:03.717 Keep Alive Granularity: 10000 ms 00:27:03.717 00:27:03.717 NVM Command Set Attributes 00:27:03.717 ========================== 00:27:03.717 Submission Queue Entry Size 00:27:03.717 Max: 64 00:27:03.717 Min: 64 00:27:03.717 Completion Queue Entry Size 00:27:03.717 Max: 16 00:27:03.717 Min: 16 00:27:03.717 Number of Namespaces: 32 00:27:03.717 Compare Command: Supported 00:27:03.717 Write Uncorrectable Command: Not Supported 00:27:03.717 Dataset Management Command: Supported 00:27:03.717 Write Zeroes Command: Supported 00:27:03.717 Set Features Save Field: Not Supported 00:27:03.717 Reservations: Not Supported 00:27:03.717 Timestamp: Not Supported 00:27:03.717 Copy: Supported 00:27:03.717 Volatile Write Cache: Present 00:27:03.717 Atomic Write Unit (Normal): 1 00:27:03.717 Atomic Write Unit (PFail): 1 00:27:03.717 Atomic Compare & Write Unit: 1 00:27:03.717 Fused Compare & Write: Supported 00:27:03.717 Scatter-Gather List 00:27:03.717 SGL Command Set: Supported (Dword aligned) 00:27:03.717 SGL Keyed: Not Supported 00:27:03.717 SGL Bit Bucket Descriptor: Not Supported 00:27:03.717 SGL Metadata Pointer: Not Supported 00:27:03.717 Oversized SGL: Not Supported 00:27:03.717 SGL Metadata Address: Not Supported 00:27:03.717 SGL Offset: Not Supported 00:27:03.717 Transport SGL Data Block: Not Supported 00:27:03.717 Replay Protected Memory Block: Not Supported 00:27:03.717 00:27:03.717 Firmware Slot Information 00:27:03.717 ========================= 00:27:03.717 Active slot: 1 00:27:03.717 Slot 1 Firmware Revision: 25.01 00:27:03.717 00:27:03.717 00:27:03.717 Commands Supported and Effects 00:27:03.717 ============================== 00:27:03.717 Admin Commands 00:27:03.717 -------------- 00:27:03.717 Get Log Page (02h): Supported 00:27:03.717 Identify (06h): Supported 00:27:03.717 Abort (08h): Supported 00:27:03.717 Set Features (09h): Supported 00:27:03.717 Get Features (0Ah): Supported 00:27:03.717 Asynchronous Event Request (0Ch): Supported 00:27:03.717 Keep Alive (18h): Supported 00:27:03.717 I/O Commands 00:27:03.717 ------------ 00:27:03.717 Flush (00h): Supported LBA-Change 00:27:03.717 Write (01h): Supported LBA-Change 00:27:03.717 Read (02h): Supported 00:27:03.717 Compare (05h): Supported 00:27:03.717 Write Zeroes (08h): Supported LBA-Change 00:27:03.717 Dataset Management (09h): Supported LBA-Change 00:27:03.717 Copy (19h): Supported LBA-Change 00:27:03.717 00:27:03.717 Error Log 00:27:03.717 ========= 00:27:03.717 00:27:03.717 Arbitration 00:27:03.717 =========== 00:27:03.717 Arbitration Burst: 1 00:27:03.717 00:27:03.717 Power Management 00:27:03.717 ================ 00:27:03.717 Number of Power States: 1 00:27:03.717 Current Power State: Power State #0 00:27:03.717 Power State #0: 00:27:03.717 Max Power: 0.00 W 00:27:03.717 Non-Operational State: Operational 00:27:03.717 Entry Latency: Not Reported 00:27:03.717 Exit Latency: Not Reported 00:27:03.717 Relative Read Throughput: 0 00:27:03.717 Relative Read Latency: 0 00:27:03.717 Relative Write Throughput: 0 00:27:03.717 Relative Write Latency: 0 00:27:03.717 Idle Power: Not Reported 00:27:03.717 Active Power: Not Reported 00:27:03.717 Non-Operational Permissive Mode: Not Supported 00:27:03.717 00:27:03.717 Health Information 00:27:03.717 ================== 00:27:03.717 Critical Warnings: 00:27:03.717 Available Spare Space: OK 00:27:03.717 Temperature: OK 00:27:03.717 Device 
Reliability: OK 00:27:03.717 Read Only: No 00:27:03.717 Volatile Memory Backup: OK 00:27:03.717 Current Temperature: 0 Kelvin (-273 Celsius) 00:27:03.717 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:27:03.717 Available Spare: 0% 00:27:03.717 Available Spare Threshold: 0%
[2024-10-01 22:25:58.951765] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:27:03.717 [2024-10-01 22:25:58.959630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:27:03.717 [2024-10-01 22:25:58.959661] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:27:03.717 [2024-10-01 22:25:58.959671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.717 [2024-10-01 22:25:58.959678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.717 [2024-10-01 22:25:58.959684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.717 [2024-10-01 22:25:58.959691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.717 [2024-10-01 22:25:58.959738] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:27:03.717 [2024-10-01 22:25:58.959749] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:27:03.717 [2024-10-01 22:25:58.960744] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:27:03.717 [2024-10-01 22:25:58.960796] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:27:03.717 [2024-10-01 22:25:58.960803] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:27:03.717 [2024-10-01 22:25:58.961748] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:27:03.717 [2024-10-01 22:25:58.961761] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:27:03.717 [2024-10-01 22:25:58.961816] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:27:03.717 [2024-10-01 22:25:58.963194] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:27:03.977 Life Percentage Used: 0% 00:27:03.977 Data Units Read: 0 00:27:03.977 Data Units Written: 0 00:27:03.977 Host Read Commands: 0 00:27:03.977 Host Write Commands: 0 00:27:03.977 Controller Busy Time: 0 minutes 00:27:03.977 Power Cycles: 0 00:27:03.977 Power On Hours: 0 hours 00:27:03.977 Unsafe Shutdowns: 0 00:27:03.977 Unrecoverable Media Errors: 0 00:27:03.977 Lifetime Error Log Entries: 0 00:27:03.977 Warning Temperature Time: 0 minutes 00:27:03.977 Critical Temperature Time: 0 minutes 00:27:03.977 00:27:03.977 Number of Queues 00:27:03.977 ================ 00:27:03.977 Number of 
I/O Submission Queues: 127 00:27:03.977 Number of I/O Completion Queues: 127 00:27:03.977 00:27:03.977 Active Namespaces 00:27:03.977 ================= 00:27:03.977 Namespace ID:1 00:27:03.977 Error Recovery Timeout: Unlimited 00:27:03.977 Command Set Identifier: NVM (00h) 00:27:03.977 Deallocate: Supported 00:27:03.977 Deallocated/Unwritten Error: Not Supported 00:27:03.977 Deallocated Read Value: Unknown 00:27:03.977 Deallocate in Write Zeroes: Not Supported 00:27:03.977 Deallocated Guard Field: 0xFFFF 00:27:03.977 Flush: Supported 00:27:03.977 Reservation: Supported 00:27:03.977 Namespace Sharing Capabilities: Multiple Controllers 00:27:03.977 Size (in LBAs): 131072 (0GiB) 00:27:03.977 Capacity (in LBAs): 131072 (0GiB) 00:27:03.977 Utilization (in LBAs): 131072 (0GiB) 00:27:03.977 NGUID: B225E8F97B1C490FAC861B06371AEF4F 00:27:03.977 UUID: b225e8f9-7b1c-490f-ac86-1b06371aef4f 00:27:03.977 Thin Provisioning: Not Supported 00:27:03.977 Per-NS Atomic Units: Yes 00:27:03.977 Atomic Boundary Size (Normal): 0 00:27:03.977 Atomic Boundary Size (PFail): 0 00:27:03.977 Atomic Boundary Offset: 0 00:27:03.977 Maximum Single Source Range Length: 65535 00:27:03.977 Maximum Copy Length: 65535 00:27:03.977 Maximum Source Range Count: 1 00:27:03.977 NGUID/EUI64 Never Reused: No 00:27:03.977 Namespace Write Protected: No 00:27:03.977 Number of LBA Formats: 1 00:27:03.977 Current LBA Format: LBA Format #00 00:27:03.977 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:03.977 00:27:03.977 22:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:27:03.977 [2024-10-01 22:25:59.157018] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:27:09.261 Initializing NVMe Controllers 00:27:09.261 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:27:09.261 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:27:09.261 Initialization complete. Launching workers. 
00:27:09.261 ======================================================== 00:27:09.261 Latency(us) 00:27:09.261 Device Information : IOPS MiB/s Average min max 00:27:09.261 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39975.51 156.15 3201.81 848.23 6785.90 00:27:09.261 ======================================================== 00:27:09.261 Total : 39975.51 156.15 3201.81 848.23 6785.90 00:27:09.261 00:27:09.261 [2024-10-01 22:26:04.262821] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:27:09.261 22:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:27:09.261 [2024-10-01 22:26:04.442424] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:27:14.545 Initializing NVMe Controllers 00:27:14.545 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:27:14.545 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:27:14.545 Initialization complete. Launching workers. 00:27:14.545 ======================================================== 00:27:14.545 Latency(us) 00:27:14.545 Device Information : IOPS MiB/s Average min max 00:27:14.545 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34096.05 133.19 3753.38 1112.53 10703.44 00:27:14.545 ======================================================== 00:27:14.545 Total : 34096.05 133.19 3753.38 1112.53 10703.44 00:27:14.545 00:27:14.545 [2024-10-01 22:26:09.459461] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:27:14.545 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:27:14.545 [2024-10-01 22:26:09.649027] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:27:19.831 [2024-10-01 22:26:14.774710] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:27:19.831 Initializing NVMe Controllers 00:27:19.831 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:27:19.831 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:27:19.831 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:27:19.831 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:27:19.831 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:27:19.831 Initialization complete. Launching workers. 
00:27:19.831 Starting thread on core 2 00:27:19.831 Starting thread on core 3 00:27:19.831 Starting thread on core 1 00:27:19.831 22:26:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:27:19.831 [2024-10-01 22:26:15.039059] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:27:23.130 [2024-10-01 22:26:18.092240] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:27:23.130 Initializing NVMe Controllers 00:27:23.130 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:27:23.130 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:27:23.130 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:27:23.130 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:27:23.130 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:27:23.130 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:27:23.130 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:27:23.130 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:27:23.130 Initialization complete. Launching workers. 00:27:23.130 Starting thread on core 1 with urgent priority queue 00:27:23.130 Starting thread on core 2 with urgent priority queue 00:27:23.130 Starting thread on core 3 with urgent priority queue 00:27:23.130 Starting thread on core 0 with urgent priority queue 00:27:23.130 SPDK bdev Controller (SPDK2 ) core 0: 10291.33 IO/s 9.72 secs/100000 ios 00:27:23.130 SPDK bdev Controller (SPDK2 ) core 1: 11410.00 IO/s 8.76 secs/100000 ios 00:27:23.130 SPDK bdev Controller (SPDK2 ) core 2: 15180.00 IO/s 6.59 secs/100000 ios 00:27:23.130 SPDK bdev Controller (SPDK2 ) core 3: 6541.67 IO/s 15.29 secs/100000 ios 00:27:23.130 ======================================================== 00:27:23.130 00:27:23.130 22:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:27:23.130 [2024-10-01 22:26:18.355876] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:27:23.130 Initializing NVMe Controllers 00:27:23.130 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:27:23.130 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:27:23.130 Namespace ID: 1 size: 0GB 00:27:23.130 Initialization complete. 00:27:23.130 INFO: using host memory buffer for IO 00:27:23.130 Hello world! 
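The passes above drive the standard vfio-user data path at the second controller: spdk_nvme_perf read and write runs, the reconnect example (-w randrw -M 50 across lcores 1-3), the arbitration example, and hello_world; the overhead tool and a second AER pass follow below. A sketch of the perf invocation as traced, run from the SPDK repo root; the flag glosses follow spdk_nvme_perf's usual option meanings, and the -s/-g gloss is an assumption rather than something the log states:

# Transport string shared by every tool in the suite; only it distinguishes
# this run from a plain TCP or RDMA one.
r='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'

# Flag glosses:
#   -q 128   queue depth             -o 4096  I/O size in bytes
#   -w read  workload (-w write and -w randrw -M 50 in the later passes)
#   -t 5     run time in seconds     -c 0x2   core mask, i.e. lcore 1
#   -s 256 -g  DPDK memory size in MB and single hugetlbfs segment (assumed)
build/bin/spdk_nvme_perf -r "$r" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2

At that depth and block size the two perf passes above settled at 39,975 read IOPS (3.20 ms average latency) and 34,096 write IOPS (3.75 ms average) against the vfio-user controller.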
00:27:23.130 [2024-10-01 22:26:18.363935] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:27:23.390 22:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:27:23.390 [2024-10-01 22:26:18.625908] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:27:24.781 Initializing NVMe Controllers 00:27:24.781 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:27:24.781 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:27:24.781 Initialization complete. Launching workers. 00:27:24.781 submit (in ns) avg, min, max = 8596.6, 3922.5, 5994625.0 00:27:24.781 complete (in ns) avg, min, max = 19013.5, 2381.7, 3998345.8 00:27:24.781 00:27:24.781 Submit histogram 00:27:24.781 ================ 00:27:24.781 Range in us Cumulative Count 00:27:24.781 3.920 - 3.947: 1.8798% ( 357) 00:27:24.781 3.947 - 3.973: 9.4203% ( 1432) 00:27:24.781 3.973 - 4.000: 21.8946% ( 2369) 00:27:24.781 4.000 - 4.027: 33.4474% ( 2194) 00:27:24.781 4.027 - 4.053: 44.1314% ( 2029) 00:27:24.781 4.053 - 4.080: 54.2941% ( 1930) 00:27:24.781 4.080 - 4.107: 70.0226% ( 2987) 00:27:24.781 4.107 - 4.133: 83.6133% ( 2581) 00:27:24.781 4.133 - 4.160: 93.8602% ( 1946) 00:27:24.781 4.160 - 4.187: 97.6831% ( 726) 00:27:24.781 4.187 - 4.213: 98.9837% ( 247) 00:27:24.781 4.213 - 4.240: 99.3839% ( 76) 00:27:24.781 4.240 - 4.267: 99.4576% ( 14) 00:27:24.781 4.267 - 4.293: 99.4734% ( 3) 00:27:24.781 4.560 - 4.587: 99.4787% ( 1) 00:27:24.781 4.693 - 4.720: 99.4892% ( 2) 00:27:24.781 4.800 - 4.827: 99.4945% ( 1) 00:27:24.781 4.827 - 4.853: 99.4998% ( 1) 00:27:24.781 4.880 - 4.907: 99.5050% ( 1) 00:27:24.781 4.933 - 4.960: 99.5103% ( 1) 00:27:24.781 4.960 - 4.987: 99.5156% ( 1) 00:27:24.781 5.013 - 5.040: 99.5208% ( 1) 00:27:24.781 5.307 - 5.333: 99.5261% ( 1) 00:27:24.781 5.680 - 5.707: 99.5314% ( 1) 00:27:24.781 5.787 - 5.813: 99.5366% ( 1) 00:27:24.781 5.813 - 5.840: 99.5472% ( 2) 00:27:24.781 5.867 - 5.893: 99.5577% ( 2) 00:27:24.781 5.893 - 5.920: 99.5630% ( 1) 00:27:24.781 5.947 - 5.973: 99.5682% ( 1) 00:27:24.781 5.973 - 6.000: 99.5735% ( 1) 00:27:24.781 6.000 - 6.027: 99.5840% ( 2) 00:27:24.781 6.027 - 6.053: 99.5945% ( 2) 00:27:24.781 6.080 - 6.107: 99.5998% ( 1) 00:27:24.781 6.107 - 6.133: 99.6103% ( 2) 00:27:24.781 6.133 - 6.160: 99.6209% ( 2) 00:27:24.782 6.160 - 6.187: 99.6314% ( 2) 00:27:24.782 6.187 - 6.213: 99.6472% ( 3) 00:27:24.782 6.240 - 6.267: 99.6525% ( 1) 00:27:24.782 6.267 - 6.293: 99.6577% ( 1) 00:27:24.782 6.293 - 6.320: 99.6630% ( 1) 00:27:24.782 6.320 - 6.347: 99.6683% ( 1) 00:27:24.782 6.347 - 6.373: 99.6735% ( 1) 00:27:24.782 6.400 - 6.427: 99.6893% ( 3) 00:27:24.782 6.427 - 6.453: 99.6946% ( 1) 00:27:24.782 6.560 - 6.587: 99.6999% ( 1) 00:27:24.782 6.587 - 6.613: 99.7157% ( 3) 00:27:24.782 6.613 - 6.640: 99.7209% ( 1) 00:27:24.782 6.640 - 6.667: 99.7262% ( 1) 00:27:24.782 6.693 - 6.720: 99.7315% ( 1) 00:27:24.782 6.773 - 6.800: 99.7367% ( 1) 00:27:24.782 6.827 - 6.880: 99.7472% ( 2) 00:27:24.782 6.880 - 6.933: 99.7578% ( 2) 00:27:24.782 6.933 - 6.987: 99.7736% ( 3) 00:27:24.782 6.987 - 7.040: 99.7788% ( 1) 00:27:24.782 7.093 - 7.147: 99.7841% ( 1) 00:27:24.782 7.200 - 7.253: 99.7894% ( 1) 00:27:24.782 7.253 - 7.307: 99.8052% ( 3) 00:27:24.782 7.307 - 7.360: 99.8157% ( 2) 
00:27:24.782 7.360 - 7.413: 99.8210% ( 1) 00:27:24.782 7.413 - 7.467: 99.8262% ( 1) 00:27:24.782 7.467 - 7.520: 99.8315% ( 1) 00:27:24.782 7.520 - 7.573: 99.8420% ( 2) 00:27:24.782 7.787 - 7.840: 99.8473% ( 1) 00:27:24.782 7.840 - 7.893: 99.8578% ( 2) 00:27:24.782 7.893 - 7.947: 99.8631% ( 1) 00:27:24.782 8.107 - 8.160: 99.8684% ( 1) 00:27:24.782 8.213 - 8.267: 99.8736% ( 1) 00:27:24.782 10.080 - 10.133: 99.8789% ( 1) 00:27:24.782 13.227 - 13.280: 99.8842% ( 1) 00:27:24.782 13.493 - 13.547: 99.8894% ( 1) 00:27:24.782 3986.773 - 4014.080: 99.9947% ( 20) 00:27:24.782 5980.160 - 6007.467: 100.0000% ( 1) 00:27:24.782 00:27:24.782 Complete histogram 00:27:24.782 ================== 00:27:24.782 Range in us Cumulative Count 00:27:24.782 2.373 - 2.387: 0.0105% ( 2) 00:27:24.782 2.387 - 2.400: 0.0527% ( 8) 00:27:24.782 2.400 - 2.413: 1.2164% ( 221) 00:27:24.782 2.413 - 2.427: 1.2690% ( 10) 00:27:24.782 2.427 - 2.440: 1.3743% ( 20) 00:27:24.782 2.440 - [2024-10-01 22:26:19.721310] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:27:24.782 2.453: 1.4059% ( 6) 00:27:24.782 2.453 - 2.467: 22.0157% ( 3914) 00:27:24.782 2.467 - 2.480: 63.5038% ( 7879) 00:27:24.782 2.480 - 2.493: 73.3558% ( 1871) 00:27:24.782 2.493 - 2.507: 81.0910% ( 1469) 00:27:24.782 2.507 - 2.520: 83.8660% ( 527) 00:27:24.782 2.520 - 2.533: 85.5721% ( 324) 00:27:24.782 2.533 - 2.547: 90.0637% ( 853) 00:27:24.782 2.547 - 2.560: 94.8133% ( 902) 00:27:24.782 2.560 - 2.573: 97.4251% ( 496) 00:27:24.782 2.573 - 2.587: 98.6467% ( 232) 00:27:24.782 2.587 - 2.600: 99.1733% ( 100) 00:27:24.782 2.600 - 2.613: 99.3102% ( 26) 00:27:24.782 2.613 - 2.627: 99.3365% ( 5) 00:27:24.782 2.640 - 2.653: 99.3418% ( 1) 00:27:24.782 2.693 - 2.707: 99.3471% ( 1) 00:27:24.782 2.973 - 2.987: 99.3523% ( 1) 00:27:24.782 3.187 - 3.200: 99.3576% ( 1) 00:27:24.782 4.240 - 4.267: 99.3629% ( 1) 00:27:24.782 4.267 - 4.293: 99.3681% ( 1) 00:27:24.782 4.373 - 4.400: 99.3734% ( 1) 00:27:24.782 4.400 - 4.427: 99.3787% ( 1) 00:27:24.782 4.560 - 4.587: 99.3892% ( 2) 00:27:24.782 4.587 - 4.613: 99.3945% ( 1) 00:27:24.782 4.613 - 4.640: 99.3997% ( 1) 00:27:24.782 4.720 - 4.747: 99.4102% ( 2) 00:27:24.782 4.853 - 4.880: 99.4155% ( 1) 00:27:24.782 4.880 - 4.907: 99.4208% ( 1) 00:27:24.782 4.907 - 4.933: 99.4260% ( 1) 00:27:24.782 4.933 - 4.960: 99.4366% ( 2) 00:27:24.782 4.960 - 4.987: 99.4418% ( 1) 00:27:24.782 4.987 - 5.013: 99.4524% ( 2) 00:27:24.782 5.067 - 5.093: 99.4576% ( 1) 00:27:24.782 5.093 - 5.120: 99.4629% ( 1) 00:27:24.782 5.173 - 5.200: 99.4682% ( 1) 00:27:24.782 5.333 - 5.360: 99.4734% ( 1) 00:27:24.782 5.440 - 5.467: 99.4787% ( 1) 00:27:24.782 5.493 - 5.520: 99.4945% ( 3) 00:27:24.782 5.627 - 5.653: 99.4998% ( 1) 00:27:24.782 5.707 - 5.733: 99.5050% ( 1) 00:27:24.782 5.760 - 5.787: 99.5103% ( 1) 00:27:24.782 5.840 - 5.867: 99.5156% ( 1) 00:27:24.782 5.867 - 5.893: 99.5208% ( 1) 00:27:24.782 5.893 - 5.920: 99.5261% ( 1) 00:27:24.782 5.920 - 5.947: 99.5314% ( 1) 00:27:24.782 5.947 - 5.973: 99.5419% ( 2) 00:27:24.782 6.027 - 6.053: 99.5472% ( 1) 00:27:24.782 6.053 - 6.080: 99.5524% ( 1) 00:27:24.782 8.693 - 8.747: 99.5577% ( 1) 00:27:24.782 9.067 - 9.120: 99.5630% ( 1) 00:27:24.782 9.547 - 9.600: 99.5682% ( 1) 00:27:24.782 10.613 - 10.667: 99.5735% ( 1) 00:27:24.782 12.533 - 12.587: 99.5787% ( 1) 00:27:24.782 44.160 - 44.373: 99.5840% ( 1) 00:27:24.782 2129.920 - 2143.573: 99.5893% ( 1) 00:27:24.782 3986.773 - 4014.080: 100.0000% ( 78) 00:27:24.782 00:27:24.782 22:26:19 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:27:24.782 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:27:24.782 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:27:24.782 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:27:24.782 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:27:24.782 [ 00:27:24.782 { 00:27:24.782 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:24.782 "subtype": "Discovery", 00:27:24.782 "listen_addresses": [], 00:27:24.783 "allow_any_host": true, 00:27:24.783 "hosts": [] 00:27:24.783 }, 00:27:24.783 { 00:27:24.783 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:27:24.783 "subtype": "NVMe", 00:27:24.783 "listen_addresses": [ 00:27:24.783 { 00:27:24.783 "trtype": "VFIOUSER", 00:27:24.783 "adrfam": "IPv4", 00:27:24.783 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:27:24.783 "trsvcid": "0" 00:27:24.783 } 00:27:24.783 ], 00:27:24.783 "allow_any_host": true, 00:27:24.783 "hosts": [], 00:27:24.783 "serial_number": "SPDK1", 00:27:24.783 "model_number": "SPDK bdev Controller", 00:27:24.783 "max_namespaces": 32, 00:27:24.783 "min_cntlid": 1, 00:27:24.783 "max_cntlid": 65519, 00:27:24.783 "namespaces": [ 00:27:24.783 { 00:27:24.783 "nsid": 1, 00:27:24.783 "bdev_name": "Malloc1", 00:27:24.783 "name": "Malloc1", 00:27:24.783 "nguid": "72ECF17165F64E6EA8478A86020B8A52", 00:27:24.783 "uuid": "72ecf171-65f6-4e6e-a847-8a86020b8a52" 00:27:24.783 }, 00:27:24.783 { 00:27:24.783 "nsid": 2, 00:27:24.783 "bdev_name": "Malloc3", 00:27:24.783 "name": "Malloc3", 00:27:24.783 "nguid": "B18A13821BCD45209AAA7FE48B2A7DAF", 00:27:24.783 "uuid": "b18a1382-1bcd-4520-9aaa-7fe48b2a7daf" 00:27:24.783 } 00:27:24.783 ] 00:27:24.783 }, 00:27:24.783 { 00:27:24.783 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:27:24.783 "subtype": "NVMe", 00:27:24.783 "listen_addresses": [ 00:27:24.783 { 00:27:24.783 "trtype": "VFIOUSER", 00:27:24.783 "adrfam": "IPv4", 00:27:24.783 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:27:24.783 "trsvcid": "0" 00:27:24.783 } 00:27:24.783 ], 00:27:24.783 "allow_any_host": true, 00:27:24.783 "hosts": [], 00:27:24.783 "serial_number": "SPDK2", 00:27:24.783 "model_number": "SPDK bdev Controller", 00:27:24.783 "max_namespaces": 32, 00:27:24.783 "min_cntlid": 1, 00:27:24.783 "max_cntlid": 65519, 00:27:24.783 "namespaces": [ 00:27:24.783 { 00:27:24.783 "nsid": 1, 00:27:24.783 "bdev_name": "Malloc2", 00:27:24.783 "name": "Malloc2", 00:27:24.783 "nguid": "B225E8F97B1C490FAC861B06371AEF4F", 00:27:24.783 "uuid": "b225e8f9-7b1c-490f-ac86-1b06371aef4f" 00:27:24.783 } 00:27:24.783 ] 00:27:24.783 } 00:27:24.783 ] 00:27:24.783 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:27:24.783 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=168317 00:27:24.783 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:27:24.783 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:27:24.783 
22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:27:24.783 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:24.783 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:27:24.783 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=1 00:27:24.783 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:27:25.044 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:25.044 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:27:25.044 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=2 00:27:25.044 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:27:25.044 [2024-10-01 22:26:20.129049] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:27:25.044 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:25.044 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:25.044 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:27:25.044 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:27:25.044 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:27:25.306 Malloc4 00:27:25.306 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:27:25.306 [2024-10-01 22:26:20.527743] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:27:25.306 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:27:25.567 Asynchronous Event Request test 00:27:25.567 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:27:25.567 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:27:25.567 Registering asynchronous event callbacks... 00:27:25.567 Starting namespace attribute notice tests for all controllers... 00:27:25.567 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:27:25.567 aer_cb - Changed Namespace 00:27:25.567 Cleaning up... 
00:27:25.567 [ 00:27:25.567 { 00:27:25.567 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:25.567 "subtype": "Discovery", 00:27:25.567 "listen_addresses": [], 00:27:25.567 "allow_any_host": true, 00:27:25.567 "hosts": [] 00:27:25.567 }, 00:27:25.567 { 00:27:25.567 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:27:25.567 "subtype": "NVMe", 00:27:25.567 "listen_addresses": [ 00:27:25.567 { 00:27:25.567 "trtype": "VFIOUSER", 00:27:25.567 "adrfam": "IPv4", 00:27:25.567 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:27:25.567 "trsvcid": "0" 00:27:25.567 } 00:27:25.567 ], 00:27:25.567 "allow_any_host": true, 00:27:25.567 "hosts": [], 00:27:25.567 "serial_number": "SPDK1", 00:27:25.567 "model_number": "SPDK bdev Controller", 00:27:25.567 "max_namespaces": 32, 00:27:25.567 "min_cntlid": 1, 00:27:25.567 "max_cntlid": 65519, 00:27:25.567 "namespaces": [ 00:27:25.567 { 00:27:25.567 "nsid": 1, 00:27:25.567 "bdev_name": "Malloc1", 00:27:25.567 "name": "Malloc1", 00:27:25.567 "nguid": "72ECF17165F64E6EA8478A86020B8A52", 00:27:25.567 "uuid": "72ecf171-65f6-4e6e-a847-8a86020b8a52" 00:27:25.567 }, 00:27:25.567 { 00:27:25.567 "nsid": 2, 00:27:25.567 "bdev_name": "Malloc3", 00:27:25.567 "name": "Malloc3", 00:27:25.567 "nguid": "B18A13821BCD45209AAA7FE48B2A7DAF", 00:27:25.567 "uuid": "b18a1382-1bcd-4520-9aaa-7fe48b2a7daf" 00:27:25.567 } 00:27:25.567 ] 00:27:25.567 }, 00:27:25.567 { 00:27:25.567 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:27:25.567 "subtype": "NVMe", 00:27:25.567 "listen_addresses": [ 00:27:25.567 { 00:27:25.567 "trtype": "VFIOUSER", 00:27:25.567 "adrfam": "IPv4", 00:27:25.567 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:27:25.567 "trsvcid": "0" 00:27:25.567 } 00:27:25.567 ], 00:27:25.567 "allow_any_host": true, 00:27:25.567 "hosts": [], 00:27:25.567 "serial_number": "SPDK2", 00:27:25.567 "model_number": "SPDK bdev Controller", 00:27:25.567 "max_namespaces": 32, 00:27:25.567 "min_cntlid": 1, 00:27:25.567 "max_cntlid": 65519, 00:27:25.567 "namespaces": [ 00:27:25.567 { 00:27:25.567 "nsid": 1, 00:27:25.567 "bdev_name": "Malloc2", 00:27:25.567 "name": "Malloc2", 00:27:25.567 "nguid": "B225E8F97B1C490FAC861B06371AEF4F", 00:27:25.567 "uuid": "b225e8f9-7b1c-490f-ac86-1b06371aef4f" 00:27:25.567 }, 00:27:25.567 { 00:27:25.567 "nsid": 2, 00:27:25.567 "bdev_name": "Malloc4", 00:27:25.567 "name": "Malloc4", 00:27:25.568 "nguid": "388C9DE01428480A86F753E97B3EE832", 00:27:25.568 "uuid": "388c9de0-1428-480a-86f7-53e97b3ee832" 00:27:25.568 } 00:27:25.568 ] 00:27:25.568 } 00:27:25.568 ] 00:27:25.568 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 168317 00:27:25.568 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:27:25.568 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 159270 00:27:25.568 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 159270 ']' 00:27:25.568 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 159270 00:27:25.568 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:27:25.568 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:25.568 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 159270 00:27:25.568 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:25.568 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:25.568 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 159270' 00:27:25.568 killing process with pid 159270 00:27:25.568 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 159270 00:27:25.568 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 159270 00:27:25.829 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:27:25.829 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:27:25.829 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:27:25.829 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:27:25.829 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:27:25.829 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=168381 00:27:25.829 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 168381' 00:27:25.829 Process pid: 168381 00:27:25.829 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:27:25.829 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:27:25.829 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 168381 00:27:25.829 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 168381 ']' 00:27:25.829 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:25.829 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:25.829 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:25.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:25.829 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:25.829 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:27:26.089 [2024-10-01 22:26:21.093839] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:26.089 [2024-10-01 22:26:21.094780] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:27:26.089 [2024-10-01 22:26:21.094823] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:26.089 [2024-10-01 22:26:21.156406] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:26.089 [2024-10-01 22:26:21.222543] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:26.089 [2024-10-01 22:26:21.222583] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:26.089 [2024-10-01 22:26:21.222591] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:26.089 [2024-10-01 22:26:21.222597] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:26.089 [2024-10-01 22:26:21.222603] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:26.089 [2024-10-01 22:26:21.222713] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:26.089 [2024-10-01 22:26:21.222839] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:27:26.089 [2024-10-01 22:26:21.222975] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:26.089 [2024-10-01 22:26:21.222977] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:27:26.089 [2024-10-01 22:26:21.337931] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:26.089 [2024-10-01 22:26:21.338266] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:26.089 [2024-10-01 22:26:21.339040] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:26.089 [2024-10-01 22:26:21.339166] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:27:26.089 [2024-10-01 22:26:21.339389] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:27:26.660 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:26.660 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:27:26.660 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:27:28.041 22:26:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:27:28.041 22:26:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:27:28.041 22:26:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:27:28.041 22:26:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:27:28.041 22:26:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:27:28.041 22:26:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:27:28.041 Malloc1 00:27:28.041 22:26:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:27:28.300 22:26:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:27:28.560 22:26:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:27:28.819 22:26:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:27:28.819 22:26:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:27:28.819 22:26:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:27:28.819 Malloc2 00:27:28.819 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:27:29.079 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:27:29.337 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:27:29.338 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:27:29.338 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 168381 00:27:29.338 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@950 -- # '[' -z 168381 ']' 00:27:29.338 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 168381 00:27:29.338 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:27:29.338 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:29.338 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 168381 00:27:29.597 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:29.597 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:29.597 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 168381' 00:27:29.597 killing process with pid 168381 00:27:29.597 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 168381 00:27:29.597 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 168381 00:27:29.597 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:27:29.856 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:27:29.856 00:27:29.856 real 0m51.612s 00:27:29.856 user 3m17.483s 00:27:29.856 sys 0m2.850s 00:27:29.856 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:29.856 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:27:29.856 ************************************ 00:27:29.856 END TEST nvmf_vfio_user 00:27:29.856 ************************************ 00:27:29.856 22:26:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:27:29.856 22:26:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:29.856 22:26:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:29.856 22:26:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:29.856 ************************************ 00:27:29.856 START TEST nvmf_vfio_user_nvme_compliance 00:27:29.856 ************************************ 00:27:29.856 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:27:29.856 * Looking for test storage... 
00:27:29.856 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:27:29.856 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:29.856 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lcov --version 00:27:29.856 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:30.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.116 --rc genhtml_branch_coverage=1 00:27:30.116 --rc genhtml_function_coverage=1 00:27:30.116 --rc genhtml_legend=1 00:27:30.116 --rc geninfo_all_blocks=1 00:27:30.116 --rc geninfo_unexecuted_blocks=1 00:27:30.116 00:27:30.116 ' 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:30.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.116 --rc genhtml_branch_coverage=1 00:27:30.116 --rc genhtml_function_coverage=1 00:27:30.116 --rc genhtml_legend=1 00:27:30.116 --rc geninfo_all_blocks=1 00:27:30.116 --rc geninfo_unexecuted_blocks=1 00:27:30.116 00:27:30.116 ' 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:30.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.116 --rc genhtml_branch_coverage=1 00:27:30.116 --rc genhtml_function_coverage=1 00:27:30.116 --rc genhtml_legend=1 00:27:30.116 --rc geninfo_all_blocks=1 00:27:30.116 --rc geninfo_unexecuted_blocks=1 00:27:30.116 00:27:30.116 ' 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:30.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.116 --rc genhtml_branch_coverage=1 00:27:30.116 --rc genhtml_function_coverage=1 00:27:30.116 --rc genhtml_legend=1 00:27:30.116 --rc geninfo_all_blocks=1 00:27:30.116 --rc 
geninfo_unexecuted_blocks=1 00:27:30.116 00:27:30.116 ' 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:30.116 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.117 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.117 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.117 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:27:30.117 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.117 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:27:30.117 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:30.117 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:30.117 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:30.117 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:30.117 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:27:30.117 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:30.117 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:30.117 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:30.117 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:30.117 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:30.117 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:30.117 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:30.117 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:27:30.117 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:27:30.117 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:27:30.117 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=169458 00:27:30.117 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 169458' 00:27:30.117 Process pid: 169458 00:27:30.117 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:27:30.117 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:27:30.117 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 169458 00:27:30.117 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 169458 ']' 00:27:30.117 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:30.117 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:30.117 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:30.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:30.117 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:30.117 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:27:30.117 [2024-10-01 22:26:25.227882] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:27:30.117 [2024-10-01 22:26:25.227938] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:30.117 [2024-10-01 22:26:25.288812] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:30.117 [2024-10-01 22:26:25.353361] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:30.117 [2024-10-01 22:26:25.353396] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:30.117 [2024-10-01 22:26:25.353404] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:30.117 [2024-10-01 22:26:25.353411] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:30.117 [2024-10-01 22:26:25.353416] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:30.117 [2024-10-01 22:26:25.353549] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:30.117 [2024-10-01 22:26:25.353664] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:27:30.117 [2024-10-01 22:26:25.353679] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:31.057 22:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:31.057 22:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:27:31.057 22:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:27:31.998 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:27:31.998 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:27:31.998 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:27:31.998 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.998 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:27:31.998 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.998 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:27:31.998 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:27:31.998 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.998 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:27:31.998 malloc0 00:27:31.998 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.998 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:27:31.998 22:26:27 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.998 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:27:31.998 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.998 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:27:31.998 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.998 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:27:31.998 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.998 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:27:31.998 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.998 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:27:31.998 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.998 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:27:31.998 00:27:31.998 00:27:31.998 CUnit - A unit testing framework for C - Version 2.1-3 00:27:31.998 http://cunit.sourceforge.net/ 00:27:31.998 00:27:31.998 00:27:31.998 Suite: nvme_compliance 00:27:32.258 Test: admin_identify_ctrlr_verify_dptr ...[2024-10-01 22:26:27.265104] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:32.259 [2024-10-01 22:26:27.266442] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:27:32.259 [2024-10-01 22:26:27.266453] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:27:32.259 [2024-10-01 22:26:27.266457] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:27:32.259 [2024-10-01 22:26:27.268122] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:32.259 passed 00:27:32.259 Test: admin_identify_ctrlr_verify_fused ...[2024-10-01 22:26:27.363719] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:32.259 [2024-10-01 22:26:27.366735] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:32.259 passed 00:27:32.259 Test: admin_identify_ns ...[2024-10-01 22:26:27.462876] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:32.519 [2024-10-01 22:26:27.525644] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:27:32.519 [2024-10-01 22:26:27.533636] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:27:32.519 [2024-10-01 22:26:27.554753] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:27:32.519 passed 00:27:32.519 Test: admin_get_features_mandatory_features ...[2024-10-01 22:26:27.645382] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:32.519 [2024-10-01 22:26:27.648395] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:32.519 passed 00:27:32.519 Test: admin_get_features_optional_features ...[2024-10-01 22:26:27.742949] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:32.519 [2024-10-01 22:26:27.745965] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:32.779 passed 00:27:32.779 Test: admin_set_features_number_of_queues ...[2024-10-01 22:26:27.840092] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:32.779 [2024-10-01 22:26:27.944714] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:32.779 passed 00:27:33.040 Test: admin_get_log_page_mandatory_logs ...[2024-10-01 22:26:28.036339] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:33.040 [2024-10-01 22:26:28.039357] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:33.040 passed 00:27:33.040 Test: admin_get_log_page_with_lpo ...[2024-10-01 22:26:28.133500] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:33.040 [2024-10-01 22:26:28.200636] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:27:33.040 [2024-10-01 22:26:28.213679] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:33.040 passed 00:27:33.300 Test: fabric_property_get ...[2024-10-01 22:26:28.305313] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:33.300 [2024-10-01 22:26:28.306558] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:27:33.300 [2024-10-01 22:26:28.308332] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:33.300 passed 00:27:33.300 Test: admin_delete_io_sq_use_admin_qid ...[2024-10-01 22:26:28.401879] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:33.300 [2024-10-01 22:26:28.403159] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:27:33.300 [2024-10-01 22:26:28.404909] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:33.300 passed 00:27:33.300 Test: admin_delete_io_sq_delete_sq_twice ...[2024-10-01 22:26:28.494035] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:33.561 [2024-10-01 22:26:28.578632] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:27:33.561 [2024-10-01 22:26:28.594635] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:27:33.561 [2024-10-01 22:26:28.599720] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:33.561 passed 00:27:33.561 Test: admin_delete_io_cq_use_admin_qid ...[2024-10-01 22:26:28.691310] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:33.561 [2024-10-01 22:26:28.692563] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:27:33.561 [2024-10-01 22:26:28.694332] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:27:33.561 passed 00:27:33.561 Test: admin_delete_io_cq_delete_cq_first ...[2024-10-01 22:26:28.785440] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:33.821 [2024-10-01 22:26:28.864632] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:27:33.821 [2024-10-01 22:26:28.888631] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:27:33.821 [2024-10-01 22:26:28.893725] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:33.821 passed 00:27:33.821 Test: admin_create_io_cq_verify_iv_pc ...[2024-10-01 22:26:28.983334] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:33.821 [2024-10-01 22:26:28.984576] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:27:33.821 [2024-10-01 22:26:28.984594] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:27:33.821 [2024-10-01 22:26:28.986356] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:33.821 passed 00:27:34.081 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-10-01 22:26:29.079457] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:34.081 [2024-10-01 22:26:29.170636] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:27:34.081 [2024-10-01 22:26:29.178632] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:27:34.081 [2024-10-01 22:26:29.186628] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:27:34.081 [2024-10-01 22:26:29.194630] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:27:34.081 [2024-10-01 22:26:29.223719] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:34.081 passed 00:27:34.081 Test: admin_create_io_sq_verify_pc ...[2024-10-01 22:26:29.317321] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:34.081 [2024-10-01 22:26:29.331637] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:27:34.341 [2024-10-01 22:26:29.349481] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:34.341 passed 00:27:34.341 Test: admin_create_io_qp_max_qps ...[2024-10-01 22:26:29.445025] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:35.725 [2024-10-01 22:26:30.554633] nvme_ctrlr.c:5504:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:27:35.725 [2024-10-01 22:26:30.935465] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:35.725 passed 00:27:35.985 Test: admin_create_io_sq_shared_cq ...[2024-10-01 22:26:31.027896] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:35.985 [2024-10-01 22:26:31.163630] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:27:35.985 [2024-10-01 22:26:31.200698] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:35.985 passed 00:27:35.985 00:27:35.985 Run Summary: Type Total Ran Passed Failed Inactive 00:27:35.985 suites 1 1 n/a 0 0 00:27:35.985 tests 18 18 18 0 0 00:27:35.985 asserts 360 
360 360 0 n/a 00:27:35.985 00:27:35.985 Elapsed time = 1.650 seconds 00:27:36.245 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 169458 00:27:36.245 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 169458 ']' 00:27:36.245 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 169458 00:27:36.245 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:27:36.245 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:36.245 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 169458 00:27:36.245 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:36.245 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:36.245 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 169458' 00:27:36.245 killing process with pid 169458 00:27:36.245 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 169458 00:27:36.245 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 169458 00:27:36.507 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:27:36.507 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:27:36.507 00:27:36.507 real 0m6.584s 00:27:36.507 user 0m18.581s 00:27:36.507 sys 0m0.579s 00:27:36.507 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:36.507 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:27:36.507 ************************************ 00:27:36.507 END TEST nvmf_vfio_user_nvme_compliance 00:27:36.507 ************************************ 00:27:36.507 22:26:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:27:36.507 22:26:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:36.507 22:26:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:36.507 22:26:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:36.507 ************************************ 00:27:36.507 START TEST nvmf_vfio_user_fuzz 00:27:36.507 ************************************ 00:27:36.507 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:27:36.507 * Looking for test storage... 
00:27:36.507 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:36.507 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:36.507 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:27:36.507 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:36.769 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:36.769 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:36.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:36.770 --rc genhtml_branch_coverage=1 00:27:36.770 --rc genhtml_function_coverage=1 00:27:36.770 --rc genhtml_legend=1 00:27:36.770 --rc geninfo_all_blocks=1 00:27:36.770 --rc geninfo_unexecuted_blocks=1 00:27:36.770 00:27:36.770 ' 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:36.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:36.770 --rc genhtml_branch_coverage=1 00:27:36.770 --rc genhtml_function_coverage=1 00:27:36.770 --rc genhtml_legend=1 00:27:36.770 --rc geninfo_all_blocks=1 00:27:36.770 --rc geninfo_unexecuted_blocks=1 00:27:36.770 00:27:36.770 ' 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:36.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:36.770 --rc genhtml_branch_coverage=1 00:27:36.770 --rc genhtml_function_coverage=1 00:27:36.770 --rc genhtml_legend=1 00:27:36.770 --rc geninfo_all_blocks=1 00:27:36.770 --rc geninfo_unexecuted_blocks=1 00:27:36.770 00:27:36.770 ' 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:36.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:36.770 --rc genhtml_branch_coverage=1 00:27:36.770 --rc genhtml_function_coverage=1 00:27:36.770 --rc genhtml_legend=1 00:27:36.770 --rc geninfo_all_blocks=1 00:27:36.770 --rc geninfo_unexecuted_blocks=1 00:27:36.770 00:27:36.770 ' 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:27:36.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:27:36.770 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:27:36.771 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:27:36.771 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:27:36.771 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=170759 00:27:36.771 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 170759' 00:27:36.771 Process pid: 170759 00:27:36.771 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:27:36.771 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:36.771 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 170759 00:27:36.771 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 170759 ']' 00:27:36.771 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:36.771 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:36.771 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:36.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
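The "[: : integer expression expected" complaint above is a real shell bug in common.sh line 33: the trace shows the test expanding to '[' '' -eq 1 ']', and test(1) cannot compare an empty string as an integer. A minimal sketch of the defensive form; the variable name is hypothetical, since the trace does not show which one expanded empty:

# default the value to 0 so an unset/empty variable compares cleanly
# instead of raising "integer expression expected"
if [ "${SPDK_SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
fi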
00:27:36.771 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:36.771 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:37.712 22:26:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:37.712 22:26:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:27:37.712 22:26:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:27:38.654 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:27:38.654 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.654 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:38.654 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.654 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:27:38.654 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:27:38.654 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.654 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:38.654 malloc0 00:27:38.654 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.654 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:27:38.654 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.654 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:38.654 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.654 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:27:38.654 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.654 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:38.654 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.654 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:27:38.654 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.654 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:38.654 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.654 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
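Collected from the rpc_cmd calls above, the entire vfio-user fuzz target comes up with one mkdir and five RPCs. A minimal sketch assuming a running nvmf_tgt and SPDK's scripts/rpc.py on PATH (the test issues the same calls through its rpc_cmd wrapper):

NQN=nqn.2021-09.io.spdk:cnode0
TRADDR=/var/run/vfio-user
mkdir -p "$TRADDR"
rpc.py nvmf_create_transport -t VFIOUSER              # enable the vfio-user transport
rpc.py bdev_malloc_create 64 512 -b malloc0           # 64 MiB backing bdev, 512 B blocks
rpc.py nvmf_create_subsystem "$NQN" -a -s spdk        # -a: allow any host
rpc.py nvmf_subsystem_add_ns "$NQN" malloc0           # attach the namespace
rpc.py nvmf_subsystem_add_listener "$NQN" -t VFIOUSER -a "$TRADDR" -s 0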
00:27:38.654 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:28:10.771 Fuzzing completed. Shutting down the fuzz application 00:28:10.771 00:28:10.771 Dumping successful admin opcodes: 00:28:10.771 8, 9, 10, 24, 00:28:10.771 Dumping successful io opcodes: 00:28:10.771 0, 00:28:10.771 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1154549, total successful commands: 4542, random_seed: 2684132736 00:28:10.771 NS: 0x200003a1ef00 admin qp, Total commands completed: 145044, total successful commands: 1178, random_seed: 432917120 00:28:10.771 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:28:10.771 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.771 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:28:10.771 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.771 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 170759 00:28:10.771 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 170759 ']' 00:28:10.771 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 170759 00:28:10.771 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:28:10.771 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:10.771 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 170759 00:28:10.771 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:10.771 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:10.771 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 170759' 00:28:10.771 killing process with pid 170759 00:28:10.771 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 170759 00:28:10.771 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 170759 00:28:10.771 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:28:10.772 00:28:10.772 real 0m33.886s 00:28:10.772 user 0m40.115s 00:28:10.772 sys 0m23.101s 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:28:10.772 ************************************ 
00:28:10.772 END TEST nvmf_vfio_user_fuzz 00:28:10.772 ************************************ 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:10.772 ************************************ 00:28:10.772 START TEST nvmf_auth_target 00:28:10.772 ************************************ 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:28:10.772 * Looking for test storage... 00:28:10.772 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:10.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:10.772 --rc genhtml_branch_coverage=1 00:28:10.772 --rc genhtml_function_coverage=1 00:28:10.772 --rc genhtml_legend=1 00:28:10.772 --rc geninfo_all_blocks=1 00:28:10.772 --rc geninfo_unexecuted_blocks=1 00:28:10.772 00:28:10.772 ' 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:10.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:10.772 --rc genhtml_branch_coverage=1 00:28:10.772 --rc genhtml_function_coverage=1 00:28:10.772 --rc genhtml_legend=1 00:28:10.772 --rc geninfo_all_blocks=1 00:28:10.772 --rc geninfo_unexecuted_blocks=1 00:28:10.772 00:28:10.772 ' 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:10.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:10.772 --rc genhtml_branch_coverage=1 00:28:10.772 --rc genhtml_function_coverage=1 00:28:10.772 --rc genhtml_legend=1 00:28:10.772 --rc geninfo_all_blocks=1 00:28:10.772 --rc geninfo_unexecuted_blocks=1 00:28:10.772 00:28:10.772 ' 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:10.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:10.772 --rc genhtml_branch_coverage=1 00:28:10.772 --rc genhtml_function_coverage=1 00:28:10.772 --rc genhtml_legend=1 00:28:10.772 --rc geninfo_all_blocks=1 00:28:10.772 --rc geninfo_unexecuted_blocks=1 00:28:10.772 00:28:10.772 ' 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:10.772 22:27:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:10.772 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:10.773 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:10.773 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:10.773 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:10.773 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:28:10.773 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:10.773 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:28:10.773 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:10.773 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:10.773 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:10.773 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:10.773 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:10.773 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:10.773 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:10.773 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:10.773 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:10.773 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:10.773 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:28:10.773 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:28:10.773 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:28:10.773 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:28:10.773 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:28:10.773 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:28:10.773 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:28:10.773 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:28:10.773 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:10.773 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:10.773 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:10.773 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:10.773 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:10.773 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:10.773 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:10.773 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:10.773 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:10.773 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:10.773 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:28:10.773 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:28:19.017 
22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:19.017 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:19.017 22:27:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:19.017 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:19.017 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:19.017 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # 
net_devs+=("${pci_net_devs[@]}") 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # is_hw=yes 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:19.017 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:19.017 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:19.017 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:19.017 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:19.017 22:27:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:19.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:19.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.599 ms 00:28:19.017 00:28:19.017 --- 10.0.0.2 ping statistics --- 00:28:19.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:19.017 rtt min/avg/max/mdev = 0.599/0.599/0.599/0.000 ms 00:28:19.018 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:19.018 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:19.018 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:28:19.018 00:28:19.018 --- 10.0.0.1 ping statistics --- 00:28:19.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:19.018 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:28:19.018 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:19.018 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # return 0 00:28:19.018 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:19.018 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:19.018 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:19.018 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:19.018 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:19.018 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:19.018 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:19.018 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:28:19.018 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:19.018 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:19.018 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:19.018 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=181460 00:28:19.018 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 181460 00:28:19.018 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:28:19.018 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 181460 ']' 00:28:19.018 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:19.018 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:19.018 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
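The successful pings close the loop on the network bring-up traced above: the two E810 ports (presumably cabled back-to-back on this rig) are split across a network namespace so target and initiator get distinct stacks on one host. Condensed from the trace into a standalone sketch, using the interface names it discovered:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator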
00:28:19.018 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:19.018 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=181802 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=null 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=b3c3b1f97990b3e4a1279d7511bdce65e851ee4bec42e1d9 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.Es7 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key b3c3b1f97990b3e4a1279d7511bdce65e851ee4bec42e1d9 0 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 b3c3b1f97990b3e4a1279d7511bdce65e851ee4bec42e1d9 0 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=b3c3b1f97990b3e4a1279d7511bdce65e851ee4bec42e1d9 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=0 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 
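The python step is elided in the trace (only "python -" is logged), but the surrounding format_key DHHC-1 call suggests the DH-HMAC-CHAP secret representation from NVMe TP 8006: DHHC-1:<hash-id>:<base64(key || crc32(key))>:. A sketch under that assumption, for the null-hash 48-hex-char key generated above; the little-endian CRC32 suffix is part of the assumption:

key_hex=$(xxd -p -c0 -l 24 /dev/urandom)   # 24 random bytes -> 48 hex chars
python3 - "$key_hex" <<'PY'
import base64, binascii, struct, sys
raw = binascii.unhexlify(sys.argv[1])
crc = struct.pack('<I', binascii.crc32(raw) & 0xffffffff)  # assumed: CRC32 appended LE
print('DHHC-1:00:%s:' % base64.b64encode(raw + crc).decode())  # 00 = null hash id
PY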
00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.Es7 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.Es7 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.Es7 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=70d691404d793889250c0109bfe6b04c6e74f823ef1ed094a518fb5b05b8a8d7 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.wXB 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 70d691404d793889250c0109bfe6b04c6e74f823ef1ed094a518fb5b05b8a8d7 3 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 70d691404d793889250c0109bfe6b04c6e74f823ef1ed094a518fb5b05b8a8d7 3 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=70d691404d793889250c0109bfe6b04c6e74f823ef1ed094a518fb5b05b8a8d7 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.wXB 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.wXB 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.wXB 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 
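The key generations above and below all follow one template; condensed from the trace, a sketch of the generator. The digests map and the hex-length-to-byte-length halving are exactly as logged; format_dhchap_key stands for the DHHC-1 wrapper sketched above:

declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
gen_dhchap_key() {              # usage: gen_dhchap_key <digest> <hex-len>
    local digest=${digests[$1]} len=$2 key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex chars = len/2 bytes
    file=$(mktemp -t "spdk.key-$1.XXX")
    format_dhchap_key "$key" "$digest" > "$file"
    chmod 0600 "$file"                               # secrets stay owner-readable only
    echo "$file"
}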
00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=19fcf5305943457a4c1a435e555da6b8 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.Mja 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 19fcf5305943457a4c1a435e555da6b8 1 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 19fcf5305943457a4c1a435e555da6b8 1 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=19fcf5305943457a4c1a435e555da6b8 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:28:19.018 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.Mja 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.Mja 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.Mja 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=93ee3afab6e317a57fa68de26c1a195b7fd2537cf993d6b5 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.2FQ 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 93ee3afab6e317a57fa68de26c1a195b7fd2537cf993d6b5 2 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 93ee3afab6e317a57fa68de26c1a195b7fd2537cf993d6b5 2 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:28:19.280 22:27:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=93ee3afab6e317a57fa68de26c1a195b7fd2537cf993d6b5 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.2FQ 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.2FQ 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.2FQ 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=ca97d425f43c8f1dd7c49ffac22f943540ebe02c13c9116a 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.LJP 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key ca97d425f43c8f1dd7c49ffac22f943540ebe02c13c9116a 2 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 ca97d425f43c8f1dd7c49ffac22f943540ebe02c13c9116a 2 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=ca97d425f43c8f1dd7c49ffac22f943540ebe02c13c9116a 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.LJP 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.LJP 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.LJP 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 
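
[editor's note] target/auth.sh@94-97 pairs every subsystem key keys[i] with a controller key ckeys[i] of a different digest, so both directions of the handshake get exercised (key3 is deliberately left without a ckey, as shown shortly below). Under the same envelope assumption as in the previous note, a generated secret can be unwrapped and sanity-checked: for instance the DHHC-1:02:Y2E5N2Q0... secret used for an nvme connect later in this log decodes straight back to the ca97d425... hex traced just above. The decoder name here is made up for illustration.

# Hedged decode sketch for a DHHC-1 secret (assumes the payload is
# base64(secret || crc32 little-endian), as produced by format_dhchap_key).
dhchap_decode() {
    python3 -c 'import base64, sys, zlib
_, hash_id, b64 = sys.argv[1].split(":")[:3]
blob = base64.b64decode(b64)
secret, crc = blob[:-4], blob[-4:]
assert zlib.crc32(secret).to_bytes(4, "little") == crc, "CRC mismatch"
print("hash id:", hash_id)
print("secret :", secret.decode())' "$1"
}

# Example, using the sha384 key generated above (full string as it is
# printed at target/auth.sh@80 further down; middle elided here):
#   dhchap_decode "DHHC-1:02:Y2E5N2Q0...U0B5KA==:"
#   hash id: 02
#   secret : ca97d425f43c8f1dd7c49ffac22f943540ebe02c13c9116a
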
00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=54b8f25a7fabf34a67cc92c91019c5f0 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.bGO 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 54b8f25a7fabf34a67cc92c91019c5f0 1 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 54b8f25a7fabf34a67cc92c91019c5f0 1 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:28:19.280 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=54b8f25a7fabf34a67cc92c91019c5f0 00:28:19.281 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:28:19.281 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:28:19.281 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.bGO 00:28:19.281 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.bGO 00:28:19.281 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.bGO 00:28:19.281 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:28:19.281 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:28:19.281 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:19.281 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:28:19.281 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:28:19.281 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:28:19.281 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:19.281 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=fdcf0e5f8cacca5837ffdf6140cf2c0893180eba9cb609d5be88c84cb1179b94 00:28:19.281 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:28:19.281 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.JZd 00:28:19.281 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # 
format_dhchap_key fdcf0e5f8cacca5837ffdf6140cf2c0893180eba9cb609d5be88c84cb1179b94 3 00:28:19.281 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 fdcf0e5f8cacca5837ffdf6140cf2c0893180eba9cb609d5be88c84cb1179b94 3 00:28:19.281 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:28:19.281 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:28:19.281 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=fdcf0e5f8cacca5837ffdf6140cf2c0893180eba9cb609d5be88c84cb1179b94 00:28:19.281 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:28:19.281 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:28:19.541 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.JZd 00:28:19.541 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.JZd 00:28:19.541 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.JZd 00:28:19.541 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:28:19.541 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 181460 00:28:19.541 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 181460 ']' 00:28:19.541 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:19.541 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:19.541 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:19.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:19.541 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:19.541 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:19.541 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:19.541 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:28:19.541 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 181802 /var/tmp/host.sock 00:28:19.541 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 181802 ']' 00:28:19.541 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:28:19.541 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:19.541 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:28:19.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
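
[editor's note] Two RPC servers are in play from this point on: the nvmf target itself (pid 181460) on the default /var/tmp/spdk.sock, driven through rpc_cmd, and a host-side SPDK app (pid 181802) on /var/tmp/host.sock, driven through the hostrpc wrapper. The loop at target/auth.sh@108-113 then registers every key file in both keyrings, and each (digest, dhgroup, keyid) combination is exercised by the connect/verify/detach cycle that fills the rest of this section. A condensed sketch of both follows, reconstructed from the traced commands; rpc_cmd is reduced here to a bare rpc.py call, and keys[]/ckeys[] are assumed to hold the file paths generated above.

# RPC plumbing as used by this job (paths taken verbatim from the trace)
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
rpc_cmd() { "$rpc" "$@"; }                       # target, /var/tmp/spdk.sock
hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; } # host side

# Register each key on both sides (target/auth.sh@108-113)
for i in "${!keys[@]}"; do
    rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"
    hostrpc keyring_file_add_key "key$i" "${keys[i]}"
    if [[ -n ${ckeys[i]} ]]; then                # key3 has no ckey
        rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"
        hostrpc keyring_file_add_key "ckey$i" "${ckeys[i]}"
    fi
done

# One pass of the cycle below, condensed from connect_authenticate
# (target/auth.sh@65-78; the real helper drops --dhchap-ctrlr-key when
# the matching ckey is empty, as happens for key3)
hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204
connect_verify() {
    local digest=$1 dhgroup=$2 keyid=$3
    hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    # the target must report the new qpair with auth.state == completed
    [[ $(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth.state') == completed ]]
    hostrpc bdev_nvme_detach_controller nvme0
}

Each pass additionally re-connects with nvme-cli, feeding the wrapped secrets directly via --dhchap-secret/--dhchap-ctrl-secret (target/auth.sh@36), and disconnects again before the next combination, which is what produces the repeating attach/qpair-dump/detach/connect/disconnect pattern in the remainder of this log.
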
00:28:19.541 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:19.541 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:19.802 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:19.802 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:28:19.802 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:28:19.802 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.802 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:19.802 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.802 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:28:19.802 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Es7 00:28:19.802 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.802 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:19.802 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.802 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Es7 00:28:19.802 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Es7 00:28:20.064 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.wXB ]] 00:28:20.064 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wXB 00:28:20.064 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.064 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:20.064 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.064 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wXB 00:28:20.064 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wXB 00:28:20.064 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:28:20.064 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Mja 00:28:20.064 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.064 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:20.064 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.064 22:27:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Mja 00:28:20.064 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Mja 00:28:20.325 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.2FQ ]] 00:28:20.325 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.2FQ 00:28:20.325 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.325 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:20.325 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.325 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.2FQ 00:28:20.325 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.2FQ 00:28:20.585 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:28:20.585 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.LJP 00:28:20.585 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.585 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:20.585 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.585 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.LJP 00:28:20.585 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.LJP 00:28:20.585 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.bGO ]] 00:28:20.585 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.bGO 00:28:20.585 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.585 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:20.585 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.585 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.bGO 00:28:20.585 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.bGO 00:28:20.845 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:28:20.845 22:27:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.JZd 00:28:20.845 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.845 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:20.845 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.845 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.JZd 00:28:20.846 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.JZd 00:28:21.106 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:28:21.106 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:28:21.107 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:28:21.107 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:28:21.107 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:28:21.107 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:28:21.367 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:28:21.367 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:28:21.367 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:28:21.367 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:28:21.367 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:28:21.367 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:21.367 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:21.367 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.367 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:21.367 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.367 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:21.367 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:21.367 
22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:21.367 00:28:21.628 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:28:21.628 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:28:21.628 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:21.628 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.628 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:21.628 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.628 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:21.628 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.628 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:28:21.628 { 00:28:21.628 "cntlid": 1, 00:28:21.628 "qid": 0, 00:28:21.628 "state": "enabled", 00:28:21.628 "thread": "nvmf_tgt_poll_group_000", 00:28:21.628 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:28:21.628 "listen_address": { 00:28:21.628 "trtype": "TCP", 00:28:21.628 "adrfam": "IPv4", 00:28:21.628 "traddr": "10.0.0.2", 00:28:21.628 "trsvcid": "4420" 00:28:21.628 }, 00:28:21.628 "peer_address": { 00:28:21.628 "trtype": "TCP", 00:28:21.628 "adrfam": "IPv4", 00:28:21.628 "traddr": "10.0.0.1", 00:28:21.628 "trsvcid": "57108" 00:28:21.628 }, 00:28:21.628 "auth": { 00:28:21.628 "state": "completed", 00:28:21.628 "digest": "sha256", 00:28:21.628 "dhgroup": "null" 00:28:21.628 } 00:28:21.628 } 00:28:21.628 ]' 00:28:21.628 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:28:21.628 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:21.628 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:28:21.889 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:28:21.889 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:28:21.889 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:21.889 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:21.889 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:21.889 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:YjNjM2IxZjk3OTkwYjNlNGExMjc5ZDc1MTFiZGNlNjVlODUxZWU0YmVjNDJlMWQ5Ps097Q==: --dhchap-ctrl-secret DHHC-1:03:NzBkNjkxNDA0ZDc5Mzg4OTI1MGMwMTA5YmZlNmIwNGM2ZTc0ZjgyM2VmMWVkMDk0YTUxOGZiNWIwNWI4YThkN3OXbH8=: 00:28:21.889 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:00:YjNjM2IxZjk3OTkwYjNlNGExMjc5ZDc1MTFiZGNlNjVlODUxZWU0YmVjNDJlMWQ5Ps097Q==: --dhchap-ctrl-secret DHHC-1:03:NzBkNjkxNDA0ZDc5Mzg4OTI1MGMwMTA5YmZlNmIwNGM2ZTc0ZjgyM2VmMWVkMDk0YTUxOGZiNWIwNWI4YThkN3OXbH8=: 00:28:26.093 22:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:26.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:26.093 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:28:26.093 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.093 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:26.093 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.093 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:28:26.093 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:28:26.093 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:28:26.093 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:28:26.093 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:28:26.093 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:28:26.093 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:28:26.093 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:28:26.093 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:26.093 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:26.093 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.093 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:26.093 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.093 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:26.094 22:27:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:26.094 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:26.354 00:28:26.354 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:28:26.354 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:28:26.354 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:26.614 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.614 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:26.614 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.614 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:26.614 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.614 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:28:26.614 { 00:28:26.614 "cntlid": 3, 00:28:26.614 "qid": 0, 00:28:26.614 "state": "enabled", 00:28:26.614 "thread": "nvmf_tgt_poll_group_000", 00:28:26.614 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:28:26.614 "listen_address": { 00:28:26.614 "trtype": "TCP", 00:28:26.614 "adrfam": "IPv4", 00:28:26.614 "traddr": "10.0.0.2", 00:28:26.614 "trsvcid": "4420" 00:28:26.614 }, 00:28:26.614 "peer_address": { 00:28:26.614 "trtype": "TCP", 00:28:26.614 "adrfam": "IPv4", 00:28:26.614 "traddr": "10.0.0.1", 00:28:26.614 "trsvcid": "57136" 00:28:26.614 }, 00:28:26.614 "auth": { 00:28:26.614 "state": "completed", 00:28:26.614 "digest": "sha256", 00:28:26.614 "dhgroup": "null" 00:28:26.614 } 00:28:26.614 } 00:28:26.614 ]' 00:28:26.615 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:28:26.615 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:26.615 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:28:26.615 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:28:26.615 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:28:26.615 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:26.615 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:26.615 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:26.876 22:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTlmY2Y1MzA1OTQzNDU3YTRjMWE0MzVlNTU1ZGE2YjhHWWQq: --dhchap-ctrl-secret DHHC-1:02:OTNlZTNhZmFiNmUzMTdhNTdmYTY4ZGUyNmMxYTE5NWI3ZmQyNTM3Y2Y5OTNkNmI1qvQ86g==: 00:28:26.876 22:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:01:MTlmY2Y1MzA1OTQzNDU3YTRjMWE0MzVlNTU1ZGE2YjhHWWQq: --dhchap-ctrl-secret DHHC-1:02:OTNlZTNhZmFiNmUzMTdhNTdmYTY4ZGUyNmMxYTE5NWI3ZmQyNTM3Y2Y5OTNkNmI1qvQ86g==: 00:28:27.818 22:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:27.818 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:27.818 22:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:28:27.818 22:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.818 22:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:27.818 22:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.818 22:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:28:27.818 22:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:28:27.818 22:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:28:27.818 22:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:28:27.818 22:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:28:27.818 22:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:28:27.818 22:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:28:27.818 22:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:28:27.818 22:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:27.818 22:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:27.818 22:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.818 22:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:27.818 22:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.818 22:27:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:27.818 22:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:27.818 22:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:28.080 00:28:28.080 22:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:28:28.080 22:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:28.080 22:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:28:28.340 22:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.340 22:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:28.340 22:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.341 22:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:28.341 22:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.341 22:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:28:28.341 { 00:28:28.341 "cntlid": 5, 00:28:28.341 "qid": 0, 00:28:28.341 "state": "enabled", 00:28:28.341 "thread": "nvmf_tgt_poll_group_000", 00:28:28.341 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:28:28.341 "listen_address": { 00:28:28.341 "trtype": "TCP", 00:28:28.341 "adrfam": "IPv4", 00:28:28.341 "traddr": "10.0.0.2", 00:28:28.341 "trsvcid": "4420" 00:28:28.341 }, 00:28:28.341 "peer_address": { 00:28:28.341 "trtype": "TCP", 00:28:28.341 "adrfam": "IPv4", 00:28:28.341 "traddr": "10.0.0.1", 00:28:28.341 "trsvcid": "59158" 00:28:28.341 }, 00:28:28.341 "auth": { 00:28:28.341 "state": "completed", 00:28:28.341 "digest": "sha256", 00:28:28.341 "dhgroup": "null" 00:28:28.341 } 00:28:28.341 } 00:28:28.341 ]' 00:28:28.341 22:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:28:28.341 22:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:28.341 22:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:28:28.341 22:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:28:28.341 22:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:28:28.341 22:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:28.341 22:27:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:28.341 22:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:28.601 22:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2E5N2Q0MjVmNDNjOGYxZGQ3YzQ5ZmZhYzIyZjk0MzU0MGViZTAyYzEzYzkxMTZhU0B5KA==: --dhchap-ctrl-secret DHHC-1:01:NTRiOGYyNWE3ZmFiZjM0YTY3Y2M5MmM5MTAxOWM1ZjDN3lJm: 00:28:28.601 22:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:02:Y2E5N2Q0MjVmNDNjOGYxZGQ3YzQ5ZmZhYzIyZjk0MzU0MGViZTAyYzEzYzkxMTZhU0B5KA==: --dhchap-ctrl-secret DHHC-1:01:NTRiOGYyNWE3ZmFiZjM0YTY3Y2M5MmM5MTAxOWM1ZjDN3lJm: 00:28:29.546 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:29.546 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:29.546 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:28:29.546 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.546 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:29.546 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.546 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:28:29.546 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:28:29.546 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:28:29.546 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:28:29.546 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:28:29.546 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:28:29.546 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:28:29.546 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:28:29.546 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:29.546 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:28:29.546 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.546 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:28:29.546 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.546 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:28:29.546 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:28:29.546 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:28:29.809 00:28:29.809 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:28:29.809 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:29.809 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:28:30.123 22:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.123 22:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:30.123 22:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.123 22:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:30.123 22:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.123 22:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:28:30.123 { 00:28:30.123 "cntlid": 7, 00:28:30.123 "qid": 0, 00:28:30.123 "state": "enabled", 00:28:30.123 "thread": "nvmf_tgt_poll_group_000", 00:28:30.123 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:28:30.123 "listen_address": { 00:28:30.123 "trtype": "TCP", 00:28:30.123 "adrfam": "IPv4", 00:28:30.123 "traddr": "10.0.0.2", 00:28:30.123 "trsvcid": "4420" 00:28:30.123 }, 00:28:30.123 "peer_address": { 00:28:30.123 "trtype": "TCP", 00:28:30.123 "adrfam": "IPv4", 00:28:30.123 "traddr": "10.0.0.1", 00:28:30.123 "trsvcid": "59184" 00:28:30.123 }, 00:28:30.123 "auth": { 00:28:30.123 "state": "completed", 00:28:30.123 "digest": "sha256", 00:28:30.123 "dhgroup": "null" 00:28:30.123 } 00:28:30.123 } 00:28:30.123 ]' 00:28:30.123 22:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:28:30.123 22:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:30.123 22:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:28:30.124 22:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:28:30.124 22:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:28:30.124 22:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:30.124 22:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:30.124 22:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:30.384 22:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmRjZjBlNWY4Y2FjY2E1ODM3ZmZkZjYxNDBjZjJjMDg5MzE4MGViYTljYjYwOWQ1YmU4OGM4NGNiMTE3OWI5NASd1M0=: 00:28:30.384 22:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:03:ZmRjZjBlNWY4Y2FjY2E1ODM3ZmZkZjYxNDBjZjJjMDg5MzE4MGViYTljYjYwOWQ1YmU4OGM4NGNiMTE3OWI5NASd1M0=: 00:28:30.954 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:30.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:30.954 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:28:30.954 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.954 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:31.214 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.214 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:28:31.214 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:28:31.214 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:31.215 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:31.215 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:28:31.215 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:28:31.215 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:28:31.215 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:28:31.215 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:28:31.215 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:31.215 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:31.215 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.215 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:31.215 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.215 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:31.215 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:31.215 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:31.475 00:28:31.475 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:28:31.475 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:28:31.475 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:31.736 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.736 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:31.736 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.736 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:31.736 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.736 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:28:31.736 { 00:28:31.736 "cntlid": 9, 00:28:31.736 "qid": 0, 00:28:31.736 "state": "enabled", 00:28:31.736 "thread": "nvmf_tgt_poll_group_000", 00:28:31.736 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:28:31.736 "listen_address": { 00:28:31.736 "trtype": "TCP", 00:28:31.736 "adrfam": "IPv4", 00:28:31.736 "traddr": "10.0.0.2", 00:28:31.736 "trsvcid": "4420" 00:28:31.736 }, 00:28:31.736 "peer_address": { 00:28:31.736 "trtype": "TCP", 00:28:31.736 "adrfam": "IPv4", 00:28:31.736 "traddr": "10.0.0.1", 00:28:31.736 "trsvcid": "59206" 00:28:31.736 }, 00:28:31.736 "auth": { 00:28:31.736 "state": "completed", 00:28:31.736 "digest": "sha256", 00:28:31.736 "dhgroup": "ffdhe2048" 00:28:31.736 } 00:28:31.736 } 00:28:31.736 ]' 00:28:31.736 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:28:31.736 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:31.736 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:28:31.737 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:28:31.737 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:28:31.737 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:31.737 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:31.737 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:31.997 22:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjNjM2IxZjk3OTkwYjNlNGExMjc5ZDc1MTFiZGNlNjVlODUxZWU0YmVjNDJlMWQ5Ps097Q==: --dhchap-ctrl-secret DHHC-1:03:NzBkNjkxNDA0ZDc5Mzg4OTI1MGMwMTA5YmZlNmIwNGM2ZTc0ZjgyM2VmMWVkMDk0YTUxOGZiNWIwNWI4YThkN3OXbH8=: 00:28:31.997 22:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:00:YjNjM2IxZjk3OTkwYjNlNGExMjc5ZDc1MTFiZGNlNjVlODUxZWU0YmVjNDJlMWQ5Ps097Q==: --dhchap-ctrl-secret DHHC-1:03:NzBkNjkxNDA0ZDc5Mzg4OTI1MGMwMTA5YmZlNmIwNGM2ZTc0ZjgyM2VmMWVkMDk0YTUxOGZiNWIwNWI4YThkN3OXbH8=: 00:28:32.940 22:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:32.940 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:32.940 22:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:28:32.940 22:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.940 22:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:32.940 22:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.941 22:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:28:32.941 22:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:32.941 22:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:32.941 22:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:28:32.941 22:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:28:32.941 22:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:28:32.941 22:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:28:32.941 22:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:28:32.941 22:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:32.941 22:27:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:32.941 22:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.941 22:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:32.941 22:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.941 22:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:32.941 22:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:32.941 22:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:33.201 00:28:33.201 22:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:28:33.201 22:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:28:33.201 22:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:33.463 22:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.463 22:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:33.463 22:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.463 22:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:33.463 22:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.463 22:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:28:33.463 { 00:28:33.463 "cntlid": 11, 00:28:33.463 "qid": 0, 00:28:33.463 "state": "enabled", 00:28:33.463 "thread": "nvmf_tgt_poll_group_000", 00:28:33.463 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:28:33.463 "listen_address": { 00:28:33.463 "trtype": "TCP", 00:28:33.463 "adrfam": "IPv4", 00:28:33.463 "traddr": "10.0.0.2", 00:28:33.463 "trsvcid": "4420" 00:28:33.463 }, 00:28:33.463 "peer_address": { 00:28:33.463 "trtype": "TCP", 00:28:33.463 "adrfam": "IPv4", 00:28:33.463 "traddr": "10.0.0.1", 00:28:33.463 "trsvcid": "59236" 00:28:33.463 }, 00:28:33.463 "auth": { 00:28:33.463 "state": "completed", 00:28:33.463 "digest": "sha256", 00:28:33.463 "dhgroup": "ffdhe2048" 00:28:33.463 } 00:28:33.463 } 00:28:33.463 ]' 00:28:33.463 22:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:28:33.463 22:27:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:33.463 22:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:28:33.463 22:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:28:33.463 22:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:28:33.463 22:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:33.463 22:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:33.463 22:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:33.724 22:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTlmY2Y1MzA1OTQzNDU3YTRjMWE0MzVlNTU1ZGE2YjhHWWQq: --dhchap-ctrl-secret DHHC-1:02:OTNlZTNhZmFiNmUzMTdhNTdmYTY4ZGUyNmMxYTE5NWI3ZmQyNTM3Y2Y5OTNkNmI1qvQ86g==: 00:28:33.724 22:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:01:MTlmY2Y1MzA1OTQzNDU3YTRjMWE0MzVlNTU1ZGE2YjhHWWQq: --dhchap-ctrl-secret DHHC-1:02:OTNlZTNhZmFiNmUzMTdhNTdmYTY4ZGUyNmMxYTE5NWI3ZmQyNTM3Y2Y5OTNkNmI1qvQ86g==: 00:28:34.668 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:34.668 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:34.668 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:28:34.668 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.668 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:34.668 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.668 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:28:34.668 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:34.668 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:34.668 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:28:34.668 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:28:34.668 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:28:34.668 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:28:34.668 22:27:29 
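
[annotation] The JSON array dumped by nvmf_subsystem_get_qpairs above is what the three jq assertions in the trace inspect: the negotiated digest, the DH group, and whether authentication reached the completed state. A standalone sketch of that check, using the same jq filters as the log (only the shell variable names are mine):

    # Assumes $rpc and the subsystem NQN from the sketch above.
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

    digest=$(jq -r '.[0].auth.digest'   <<< "$qpairs")  # expected: sha256
    dhgroup=$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")  # expected: ffdhe2048
    state=$(jq -r '.[0].auth.state'     <<< "$qpairs")  # expected: completed

    [[ $digest == sha256 && $dhgroup == ffdhe2048 && $state == completed ]] \
        || echo "auth parameters do not match this round" >&2
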
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:28:34.668 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:34.668 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:34.668 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.668 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:34.668 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.668 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:34.668 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:34.668 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:34.930 00:28:34.930 22:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:28:34.930 22:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:28:34.930 22:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:35.192 22:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.192 22:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:35.192 22:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.192 22:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:35.192 22:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.192 22:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:28:35.192 { 00:28:35.192 "cntlid": 13, 00:28:35.192 "qid": 0, 00:28:35.192 "state": "enabled", 00:28:35.192 "thread": "nvmf_tgt_poll_group_000", 00:28:35.192 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:28:35.192 "listen_address": { 00:28:35.192 "trtype": "TCP", 00:28:35.192 "adrfam": "IPv4", 00:28:35.192 "traddr": "10.0.0.2", 00:28:35.192 "trsvcid": "4420" 00:28:35.192 }, 00:28:35.192 "peer_address": { 00:28:35.192 "trtype": "TCP", 00:28:35.192 "adrfam": "IPv4", 00:28:35.192 "traddr": "10.0.0.1", 00:28:35.192 "trsvcid": "59248" 00:28:35.192 }, 00:28:35.192 "auth": { 00:28:35.192 "state": "completed", 00:28:35.192 "digest": 
"sha256", 00:28:35.192 "dhgroup": "ffdhe2048" 00:28:35.192 } 00:28:35.192 } 00:28:35.192 ]' 00:28:35.192 22:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:28:35.192 22:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:35.192 22:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:28:35.192 22:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:28:35.192 22:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:28:35.192 22:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:35.192 22:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:35.192 22:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:35.455 22:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2E5N2Q0MjVmNDNjOGYxZGQ3YzQ5ZmZhYzIyZjk0MzU0MGViZTAyYzEzYzkxMTZhU0B5KA==: --dhchap-ctrl-secret DHHC-1:01:NTRiOGYyNWE3ZmFiZjM0YTY3Y2M5MmM5MTAxOWM1ZjDN3lJm: 00:28:35.455 22:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:02:Y2E5N2Q0MjVmNDNjOGYxZGQ3YzQ5ZmZhYzIyZjk0MzU0MGViZTAyYzEzYzkxMTZhU0B5KA==: --dhchap-ctrl-secret DHHC-1:01:NTRiOGYyNWE3ZmFiZjM0YTY3Y2M5MmM5MTAxOWM1ZjDN3lJm: 00:28:36.399 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:36.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:36.399 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:28:36.399 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.399 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:36.399 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.399 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:28:36.399 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:36.399 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:36.399 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:28:36.399 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:28:36.400 22:27:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:28:36.400 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:28:36.400 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:28:36.400 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:36.400 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:28:36.400 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.400 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:36.400 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.400 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:28:36.400 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:28:36.400 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:28:36.660 00:28:36.660 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:28:36.660 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:28:36.660 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:36.921 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.921 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:36.921 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.921 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:36.921 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.921 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:28:36.921 { 00:28:36.921 "cntlid": 15, 00:28:36.921 "qid": 0, 00:28:36.921 "state": "enabled", 00:28:36.921 "thread": "nvmf_tgt_poll_group_000", 00:28:36.921 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:28:36.921 "listen_address": { 00:28:36.921 "trtype": "TCP", 00:28:36.921 "adrfam": "IPv4", 00:28:36.921 "traddr": "10.0.0.2", 00:28:36.921 "trsvcid": "4420" 00:28:36.921 }, 00:28:36.921 "peer_address": { 00:28:36.921 "trtype": "TCP", 00:28:36.921 "adrfam": "IPv4", 00:28:36.921 "traddr": "10.0.0.1", 00:28:36.921 
"trsvcid": "37506" 00:28:36.921 }, 00:28:36.921 "auth": { 00:28:36.921 "state": "completed", 00:28:36.921 "digest": "sha256", 00:28:36.921 "dhgroup": "ffdhe2048" 00:28:36.921 } 00:28:36.921 } 00:28:36.921 ]' 00:28:36.921 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:28:36.921 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:36.921 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:28:36.921 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:28:36.921 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:28:36.921 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:36.921 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:36.921 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:37.183 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmRjZjBlNWY4Y2FjY2E1ODM3ZmZkZjYxNDBjZjJjMDg5MzE4MGViYTljYjYwOWQ1YmU4OGM4NGNiMTE3OWI5NASd1M0=: 00:28:37.183 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:03:ZmRjZjBlNWY4Y2FjY2E1ODM3ZmZkZjYxNDBjZjJjMDg5MzE4MGViYTljYjYwOWQ1YmU4OGM4NGNiMTE3OWI5NASd1M0=: 00:28:38.127 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:38.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:38.127 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:28:38.127 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.127 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:38.127 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.127 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:28:38.127 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:28:38.127 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:38.127 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:38.127 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:28:38.127 22:27:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:28:38.127 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:28:38.127 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:28:38.127 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:28:38.127 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:38.127 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:38.127 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.127 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:38.127 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.127 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:38.127 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:38.127 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:38.388 00:28:38.388 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:28:38.388 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:28:38.389 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:38.649 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.649 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:38.649 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.649 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:38.649 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.649 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:28:38.649 { 00:28:38.649 "cntlid": 17, 00:28:38.649 "qid": 0, 00:28:38.649 "state": "enabled", 00:28:38.649 "thread": "nvmf_tgt_poll_group_000", 00:28:38.649 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:28:38.649 "listen_address": { 00:28:38.649 "trtype": "TCP", 00:28:38.649 "adrfam": "IPv4", 
00:28:38.649 "traddr": "10.0.0.2", 00:28:38.649 "trsvcid": "4420" 00:28:38.649 }, 00:28:38.649 "peer_address": { 00:28:38.649 "trtype": "TCP", 00:28:38.649 "adrfam": "IPv4", 00:28:38.649 "traddr": "10.0.0.1", 00:28:38.649 "trsvcid": "37530" 00:28:38.649 }, 00:28:38.649 "auth": { 00:28:38.649 "state": "completed", 00:28:38.649 "digest": "sha256", 00:28:38.649 "dhgroup": "ffdhe3072" 00:28:38.649 } 00:28:38.649 } 00:28:38.649 ]' 00:28:38.650 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:28:38.650 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:38.650 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:28:38.650 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:28:38.650 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:28:38.650 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:38.650 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:38.650 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:38.911 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjNjM2IxZjk3OTkwYjNlNGExMjc5ZDc1MTFiZGNlNjVlODUxZWU0YmVjNDJlMWQ5Ps097Q==: --dhchap-ctrl-secret DHHC-1:03:NzBkNjkxNDA0ZDc5Mzg4OTI1MGMwMTA5YmZlNmIwNGM2ZTc0ZjgyM2VmMWVkMDk0YTUxOGZiNWIwNWI4YThkN3OXbH8=: 00:28:38.911 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:00:YjNjM2IxZjk3OTkwYjNlNGExMjc5ZDc1MTFiZGNlNjVlODUxZWU0YmVjNDJlMWQ5Ps097Q==: --dhchap-ctrl-secret DHHC-1:03:NzBkNjkxNDA0ZDc5Mzg4OTI1MGMwMTA5YmZlNmIwNGM2ZTc0ZjgyM2VmMWVkMDk0YTUxOGZiNWIwNWI4YThkN3OXbH8=: 00:28:39.856 22:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:39.856 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:39.856 22:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:28:39.856 22:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.856 22:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:39.856 22:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.856 22:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:28:39.856 22:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:39.856 22:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:39.856 22:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:28:39.856 22:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:28:39.856 22:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:28:39.856 22:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:28:39.856 22:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:28:39.856 22:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:39.856 22:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:39.856 22:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.856 22:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:39.856 22:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.856 22:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:39.856 22:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:39.856 22:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:40.117 00:28:40.117 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:28:40.117 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:28:40.117 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:40.379 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.379 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:40.379 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.379 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:40.379 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.379 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:28:40.379 { 
00:28:40.379 "cntlid": 19, 00:28:40.379 "qid": 0, 00:28:40.379 "state": "enabled", 00:28:40.379 "thread": "nvmf_tgt_poll_group_000", 00:28:40.379 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:28:40.379 "listen_address": { 00:28:40.379 "trtype": "TCP", 00:28:40.379 "adrfam": "IPv4", 00:28:40.379 "traddr": "10.0.0.2", 00:28:40.379 "trsvcid": "4420" 00:28:40.379 }, 00:28:40.379 "peer_address": { 00:28:40.379 "trtype": "TCP", 00:28:40.379 "adrfam": "IPv4", 00:28:40.379 "traddr": "10.0.0.1", 00:28:40.379 "trsvcid": "37552" 00:28:40.379 }, 00:28:40.379 "auth": { 00:28:40.379 "state": "completed", 00:28:40.379 "digest": "sha256", 00:28:40.379 "dhgroup": "ffdhe3072" 00:28:40.379 } 00:28:40.379 } 00:28:40.379 ]' 00:28:40.379 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:28:40.379 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:40.379 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:28:40.379 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:28:40.379 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:28:40.379 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:40.379 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:40.379 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:40.639 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTlmY2Y1MzA1OTQzNDU3YTRjMWE0MzVlNTU1ZGE2YjhHWWQq: --dhchap-ctrl-secret DHHC-1:02:OTNlZTNhZmFiNmUzMTdhNTdmYTY4ZGUyNmMxYTE5NWI3ZmQyNTM3Y2Y5OTNkNmI1qvQ86g==: 00:28:40.639 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:01:MTlmY2Y1MzA1OTQzNDU3YTRjMWE0MzVlNTU1ZGE2YjhHWWQq: --dhchap-ctrl-secret DHHC-1:02:OTNlZTNhZmFiNmUzMTdhNTdmYTY4ZGUyNmMxYTE5NWI3ZmQyNTM3Y2Y5OTNkNmI1qvQ86g==: 00:28:41.208 22:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:41.468 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:41.468 22:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:28:41.468 22:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.468 22:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:41.468 22:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.468 22:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:28:41.468 22:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:41.468 22:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:41.468 22:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:28:41.468 22:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:28:41.468 22:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:28:41.468 22:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:28:41.468 22:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:28:41.468 22:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:41.468 22:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:41.468 22:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.468 22:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:41.468 22:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.468 22:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:41.468 22:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:41.468 22:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:41.729 00:28:41.729 22:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:28:41.729 22:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:41.729 22:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:28:41.990 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.990 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:41.990 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.990 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:41.990 22:27:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.990 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:28:41.990 { 00:28:41.990 "cntlid": 21, 00:28:41.990 "qid": 0, 00:28:41.990 "state": "enabled", 00:28:41.990 "thread": "nvmf_tgt_poll_group_000", 00:28:41.990 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:28:41.990 "listen_address": { 00:28:41.990 "trtype": "TCP", 00:28:41.990 "adrfam": "IPv4", 00:28:41.990 "traddr": "10.0.0.2", 00:28:41.990 "trsvcid": "4420" 00:28:41.990 }, 00:28:41.990 "peer_address": { 00:28:41.990 "trtype": "TCP", 00:28:41.990 "adrfam": "IPv4", 00:28:41.990 "traddr": "10.0.0.1", 00:28:41.990 "trsvcid": "37570" 00:28:41.990 }, 00:28:41.990 "auth": { 00:28:41.990 "state": "completed", 00:28:41.990 "digest": "sha256", 00:28:41.990 "dhgroup": "ffdhe3072" 00:28:41.990 } 00:28:41.990 } 00:28:41.990 ]' 00:28:41.990 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:28:41.990 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:41.990 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:28:41.990 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:28:41.990 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:28:41.990 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:41.990 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:41.990 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:42.251 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2E5N2Q0MjVmNDNjOGYxZGQ3YzQ5ZmZhYzIyZjk0MzU0MGViZTAyYzEzYzkxMTZhU0B5KA==: --dhchap-ctrl-secret DHHC-1:01:NTRiOGYyNWE3ZmFiZjM0YTY3Y2M5MmM5MTAxOWM1ZjDN3lJm: 00:28:42.251 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:02:Y2E5N2Q0MjVmNDNjOGYxZGQ3YzQ5ZmZhYzIyZjk0MzU0MGViZTAyYzEzYzkxMTZhU0B5KA==: --dhchap-ctrl-secret DHHC-1:01:NTRiOGYyNWE3ZmFiZjM0YTY3Y2M5MmM5MTAxOWM1ZjDN3lJm: 00:28:43.195 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:43.196 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:43.196 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:28:43.196 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.196 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:43.196 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:28:43.196 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:28:43.196 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:43.196 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:43.196 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:28:43.196 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:28:43.196 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:28:43.196 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:28:43.196 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:28:43.196 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:43.196 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:28:43.196 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.196 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:43.196 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.196 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:28:43.196 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:28:43.196 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:28:43.456 00:28:43.456 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:28:43.456 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:43.456 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:28:43.716 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:43.716 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:43.716 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.716 22:27:38 
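
[annotation] At this point the trace's outer loop (target/auth.sh@119) advances from ffdhe2048 through ffdhe3072 toward the next DH group, re-running the same round for every key. A reconstruction of the driving structure implied by the auth.sh@119-123 markers; the digest is fixed at sha256 in this part of the log, and the dhgroups array contents are inferred from the groups that appear in the trace, so treat both as assumptions:

    # Reconstruction of the sweep implied by target/auth.sh@119-123.
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096)   # inferred from the trace

    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do         # keys[0..3] set up earlier
            hostrpc bdev_nvme_set_options \
                --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha256 "$dhgroup" "$keyid"
        done
    done
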
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:43.716 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.716 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:28:43.716 { 00:28:43.716 "cntlid": 23, 00:28:43.716 "qid": 0, 00:28:43.716 "state": "enabled", 00:28:43.716 "thread": "nvmf_tgt_poll_group_000", 00:28:43.716 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:28:43.716 "listen_address": { 00:28:43.716 "trtype": "TCP", 00:28:43.716 "adrfam": "IPv4", 00:28:43.716 "traddr": "10.0.0.2", 00:28:43.716 "trsvcid": "4420" 00:28:43.717 }, 00:28:43.717 "peer_address": { 00:28:43.717 "trtype": "TCP", 00:28:43.717 "adrfam": "IPv4", 00:28:43.717 "traddr": "10.0.0.1", 00:28:43.717 "trsvcid": "37588" 00:28:43.717 }, 00:28:43.717 "auth": { 00:28:43.717 "state": "completed", 00:28:43.717 "digest": "sha256", 00:28:43.717 "dhgroup": "ffdhe3072" 00:28:43.717 } 00:28:43.717 } 00:28:43.717 ]' 00:28:43.717 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:28:43.717 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:43.717 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:28:43.717 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:28:43.717 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:28:43.717 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:43.717 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:43.717 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:43.978 22:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmRjZjBlNWY4Y2FjY2E1ODM3ZmZkZjYxNDBjZjJjMDg5MzE4MGViYTljYjYwOWQ1YmU4OGM4NGNiMTE3OWI5NASd1M0=: 00:28:43.978 22:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:03:ZmRjZjBlNWY4Y2FjY2E1ODM3ZmZkZjYxNDBjZjJjMDg5MzE4MGViYTljYjYwOWQ1YmU4OGM4NGNiMTE3OWI5NASd1M0=: 00:28:44.921 22:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:44.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:44.921 22:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:28:44.921 22:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.921 22:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:44.921 22:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:28:44.921 22:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:28:44.921 22:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:28:44.921 22:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:44.921 22:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:44.921 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:28:44.921 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:28:44.921 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:28:44.921 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:28:44.921 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:28:44.921 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:44.921 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:44.921 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.921 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:44.921 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.921 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:44.921 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:44.921 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:45.183 00:28:45.183 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:28:45.183 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:28:45.183 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:45.443 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.443 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:45.443 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.443 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:45.443 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.443 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:28:45.443 { 00:28:45.443 "cntlid": 25, 00:28:45.443 "qid": 0, 00:28:45.443 "state": "enabled", 00:28:45.443 "thread": "nvmf_tgt_poll_group_000", 00:28:45.443 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:28:45.443 "listen_address": { 00:28:45.443 "trtype": "TCP", 00:28:45.443 "adrfam": "IPv4", 00:28:45.443 "traddr": "10.0.0.2", 00:28:45.443 "trsvcid": "4420" 00:28:45.443 }, 00:28:45.443 "peer_address": { 00:28:45.443 "trtype": "TCP", 00:28:45.443 "adrfam": "IPv4", 00:28:45.443 "traddr": "10.0.0.1", 00:28:45.443 "trsvcid": "37612" 00:28:45.443 }, 00:28:45.443 "auth": { 00:28:45.443 "state": "completed", 00:28:45.443 "digest": "sha256", 00:28:45.443 "dhgroup": "ffdhe4096" 00:28:45.443 } 00:28:45.443 } 00:28:45.443 ]' 00:28:45.443 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:28:45.443 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:45.444 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:28:45.444 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:28:45.444 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:28:45.444 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:45.444 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:45.444 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:45.704 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjNjM2IxZjk3OTkwYjNlNGExMjc5ZDc1MTFiZGNlNjVlODUxZWU0YmVjNDJlMWQ5Ps097Q==: --dhchap-ctrl-secret DHHC-1:03:NzBkNjkxNDA0ZDc5Mzg4OTI1MGMwMTA5YmZlNmIwNGM2ZTc0ZjgyM2VmMWVkMDk0YTUxOGZiNWIwNWI4YThkN3OXbH8=: 00:28:45.704 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:00:YjNjM2IxZjk3OTkwYjNlNGExMjc5ZDc1MTFiZGNlNjVlODUxZWU0YmVjNDJlMWQ5Ps097Q==: --dhchap-ctrl-secret DHHC-1:03:NzBkNjkxNDA0ZDc5Mzg4OTI1MGMwMTA5YmZlNmIwNGM2ZTc0ZjgyM2VmMWVkMDk0YTUxOGZiNWIwNWI4YThkN3OXbH8=: 00:28:46.648 22:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:46.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:46.648 22:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:28:46.648 22:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.648 22:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:46.648 22:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.648 22:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:28:46.648 22:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:46.648 22:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:46.648 22:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:28:46.648 22:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:28:46.648 22:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:28:46.648 22:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:28:46.648 22:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:28:46.648 22:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:46.648 22:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:46.648 22:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.648 22:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:46.648 22:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.648 22:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:46.648 22:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:46.648 22:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:46.909 00:28:46.909 22:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:28:46.909 22:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:28:46.909 22:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:47.169 22:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:47.169 22:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:47.169 22:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.169 22:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:47.169 22:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.169 22:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:28:47.169 { 00:28:47.169 "cntlid": 27, 00:28:47.169 "qid": 0, 00:28:47.169 "state": "enabled", 00:28:47.169 "thread": "nvmf_tgt_poll_group_000", 00:28:47.169 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:28:47.169 "listen_address": { 00:28:47.169 "trtype": "TCP", 00:28:47.169 "adrfam": "IPv4", 00:28:47.169 "traddr": "10.0.0.2", 00:28:47.169 "trsvcid": "4420" 00:28:47.169 }, 00:28:47.169 "peer_address": { 00:28:47.169 "trtype": "TCP", 00:28:47.169 "adrfam": "IPv4", 00:28:47.169 "traddr": "10.0.0.1", 00:28:47.170 "trsvcid": "33758" 00:28:47.170 }, 00:28:47.170 "auth": { 00:28:47.170 "state": "completed", 00:28:47.170 "digest": "sha256", 00:28:47.170 "dhgroup": "ffdhe4096" 00:28:47.170 } 00:28:47.170 } 00:28:47.170 ]' 00:28:47.170 22:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:28:47.170 22:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:47.170 22:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:28:47.430 22:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:28:47.430 22:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:28:47.430 22:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:47.430 22:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:47.430 22:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:47.430 22:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTlmY2Y1MzA1OTQzNDU3YTRjMWE0MzVlNTU1ZGE2YjhHWWQq: --dhchap-ctrl-secret DHHC-1:02:OTNlZTNhZmFiNmUzMTdhNTdmYTY4ZGUyNmMxYTE5NWI3ZmQyNTM3Y2Y5OTNkNmI1qvQ86g==: 00:28:47.430 22:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:01:MTlmY2Y1MzA1OTQzNDU3YTRjMWE0MzVlNTU1ZGE2YjhHWWQq: --dhchap-ctrl-secret DHHC-1:02:OTNlZTNhZmFiNmUzMTdhNTdmYTY4ZGUyNmMxYTE5NWI3ZmQyNTM3Y2Y5OTNkNmI1qvQ86g==: 00:28:48.373 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:28:48.373 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:48.373 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:28:48.373 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.373 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:48.373 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.373 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:28:48.373 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:48.373 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:48.373 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:28:48.373 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:28:48.373 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:28:48.373 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:28:48.373 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:28:48.373 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:48.373 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:48.373 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.373 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:48.373 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.373 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:48.373 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:48.373 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:48.634 00:28:48.895 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
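
The trace above is one pass of the test's inner loop: for each dhgroup in ffdhe4096/ffdhe6144/ffdhe8192 and each key id, the host is re-registered on the subsystem with the matching DH-HMAC-CHAP key pair, a controller is attached over TCP, the negotiated auth parameters on the qpair are verified, and everything is torn down before the next combination. A minimal stand-alone sketch of that cycle, assuming rpc.py is on PATH, the target listens on 10.0.0.2:4420 and uses its default RPC socket, the host RPC socket is /var/tmp/host.sock as in the trace, and keyring entries key1/ckey1 already exist on both sides:

#!/usr/bin/env bash
set -e
RPC=rpc.py                          # SPDK RPC client (path assumed)
HOST_SOCK=/var/tmp/host.sock        # host-side RPC socket, as in the trace
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204
digest=sha256 dhgroup=ffdhe4096 keyid=1

# Restrict the host to a single digest/dhgroup pair for this iteration.
"$RPC" -s "$HOST_SOCK" bdev_nvme_set_options \
    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Register the host on the subsystem; the ctrlr key enables bidirectional auth.
"$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Attach a controller over TCP, authenticating with the same key pair.
"$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Verify the controller came up and the qpair finished authentication
# with exactly the digest and dhgroup selected above.
[[ $("$RPC" -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
qpairs=$("$RPC" nvmf_subsystem_get_qpairs "$SUBNQN")
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]

# Tear down before the next digest/dhgroup/key combination.
"$RPC" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0
"$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

Note that key3 has no controller key in this run: the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion visible in the trace drops the flag entirely, so that iteration authenticates the host only and skips controller (bidirectional) authentication.
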
00:28:48.896 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:28:48.896 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:48.896 22:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.896 22:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:48.896 22:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.896 22:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:48.896 22:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.896 22:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:28:48.896 { 00:28:48.896 "cntlid": 29, 00:28:48.896 "qid": 0, 00:28:48.896 "state": "enabled", 00:28:48.896 "thread": "nvmf_tgt_poll_group_000", 00:28:48.896 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:28:48.896 "listen_address": { 00:28:48.896 "trtype": "TCP", 00:28:48.896 "adrfam": "IPv4", 00:28:48.896 "traddr": "10.0.0.2", 00:28:48.896 "trsvcid": "4420" 00:28:48.896 }, 00:28:48.896 "peer_address": { 00:28:48.896 "trtype": "TCP", 00:28:48.896 "adrfam": "IPv4", 00:28:48.896 "traddr": "10.0.0.1", 00:28:48.896 "trsvcid": "33780" 00:28:48.896 }, 00:28:48.896 "auth": { 00:28:48.896 "state": "completed", 00:28:48.896 "digest": "sha256", 00:28:48.896 "dhgroup": "ffdhe4096" 00:28:48.896 } 00:28:48.896 } 00:28:48.896 ]' 00:28:48.896 22:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:28:48.896 22:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:48.896 22:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:28:49.157 22:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:28:49.157 22:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:28:49.157 22:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:49.157 22:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:49.157 22:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:49.157 22:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2E5N2Q0MjVmNDNjOGYxZGQ3YzQ5ZmZhYzIyZjk0MzU0MGViZTAyYzEzYzkxMTZhU0B5KA==: --dhchap-ctrl-secret DHHC-1:01:NTRiOGYyNWE3ZmFiZjM0YTY3Y2M5MmM5MTAxOWM1ZjDN3lJm: 00:28:49.158 22:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:02:Y2E5N2Q0MjVmNDNjOGYxZGQ3YzQ5ZmZhYzIyZjk0MzU0MGViZTAyYzEzYzkxMTZhU0B5KA==: 
--dhchap-ctrl-secret DHHC-1:01:NTRiOGYyNWE3ZmFiZjM0YTY3Y2M5MmM5MTAxOWM1ZjDN3lJm: 00:28:50.103 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:50.103 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:50.103 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:28:50.103 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.103 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:50.103 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.103 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:28:50.103 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:50.103 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:50.103 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:28:50.104 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:28:50.104 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:28:50.104 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:28:50.104 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:28:50.104 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:50.104 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:28:50.104 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.104 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:50.104 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.104 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:28:50.104 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:28:50.104 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:28:50.365 00:28:50.626 22:27:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:28:50.626 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:28:50.626 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:50.626 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:50.626 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:50.626 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.626 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:50.626 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.626 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:28:50.626 { 00:28:50.626 "cntlid": 31, 00:28:50.626 "qid": 0, 00:28:50.626 "state": "enabled", 00:28:50.626 "thread": "nvmf_tgt_poll_group_000", 00:28:50.626 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:28:50.626 "listen_address": { 00:28:50.626 "trtype": "TCP", 00:28:50.626 "adrfam": "IPv4", 00:28:50.626 "traddr": "10.0.0.2", 00:28:50.626 "trsvcid": "4420" 00:28:50.626 }, 00:28:50.626 "peer_address": { 00:28:50.626 "trtype": "TCP", 00:28:50.626 "adrfam": "IPv4", 00:28:50.626 "traddr": "10.0.0.1", 00:28:50.626 "trsvcid": "33808" 00:28:50.626 }, 00:28:50.626 "auth": { 00:28:50.626 "state": "completed", 00:28:50.626 "digest": "sha256", 00:28:50.626 "dhgroup": "ffdhe4096" 00:28:50.626 } 00:28:50.626 } 00:28:50.626 ]' 00:28:50.626 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:28:50.626 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:50.626 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:28:50.887 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:28:50.887 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:28:50.887 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:50.887 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:50.887 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:50.887 22:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmRjZjBlNWY4Y2FjY2E1ODM3ZmZkZjYxNDBjZjJjMDg5MzE4MGViYTljYjYwOWQ1YmU4OGM4NGNiMTE3OWI5NASd1M0=: 00:28:50.887 22:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret 
DHHC-1:03:ZmRjZjBlNWY4Y2FjY2E1ODM3ZmZkZjYxNDBjZjJjMDg5MzE4MGViYTljYjYwOWQ1YmU4OGM4NGNiMTE3OWI5NASd1M0=: 00:28:51.830 22:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:51.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:51.830 22:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:28:51.830 22:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.830 22:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:51.830 22:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.830 22:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:28:51.830 22:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:28:51.830 22:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:51.830 22:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:51.830 22:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:28:51.830 22:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:28:51.830 22:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:28:51.830 22:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:28:51.830 22:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:28:51.830 22:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:51.830 22:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:51.830 22:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.830 22:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:51.830 22:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.830 22:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:51.830 22:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:51.830 22:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:52.400 00:28:52.400 22:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:28:52.400 22:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:28:52.400 22:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:52.400 22:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.400 22:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:52.400 22:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.400 22:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:52.400 22:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.400 22:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:28:52.400 { 00:28:52.400 "cntlid": 33, 00:28:52.400 "qid": 0, 00:28:52.400 "state": "enabled", 00:28:52.400 "thread": "nvmf_tgt_poll_group_000", 00:28:52.400 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:28:52.400 "listen_address": { 00:28:52.400 "trtype": "TCP", 00:28:52.400 "adrfam": "IPv4", 00:28:52.400 "traddr": "10.0.0.2", 00:28:52.400 "trsvcid": "4420" 00:28:52.400 }, 00:28:52.400 "peer_address": { 00:28:52.400 "trtype": "TCP", 00:28:52.400 "adrfam": "IPv4", 00:28:52.400 "traddr": "10.0.0.1", 00:28:52.400 "trsvcid": "33842" 00:28:52.400 }, 00:28:52.400 "auth": { 00:28:52.400 "state": "completed", 00:28:52.400 "digest": "sha256", 00:28:52.401 "dhgroup": "ffdhe6144" 00:28:52.401 } 00:28:52.401 } 00:28:52.401 ]' 00:28:52.401 22:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:28:52.661 22:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:52.661 22:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:28:52.661 22:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:28:52.661 22:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:28:52.661 22:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:52.661 22:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:52.661 22:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:52.922 22:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjNjM2IxZjk3OTkwYjNlNGExMjc5ZDc1MTFiZGNlNjVlODUxZWU0YmVjNDJlMWQ5Ps097Q==: --dhchap-ctrl-secret 
DHHC-1:03:NzBkNjkxNDA0ZDc5Mzg4OTI1MGMwMTA5YmZlNmIwNGM2ZTc0ZjgyM2VmMWVkMDk0YTUxOGZiNWIwNWI4YThkN3OXbH8=: 00:28:52.922 22:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:00:YjNjM2IxZjk3OTkwYjNlNGExMjc5ZDc1MTFiZGNlNjVlODUxZWU0YmVjNDJlMWQ5Ps097Q==: --dhchap-ctrl-secret DHHC-1:03:NzBkNjkxNDA0ZDc5Mzg4OTI1MGMwMTA5YmZlNmIwNGM2ZTc0ZjgyM2VmMWVkMDk0YTUxOGZiNWIwNWI4YThkN3OXbH8=: 00:28:53.492 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:53.492 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:53.492 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:28:53.492 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.492 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:53.492 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.492 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:28:53.492 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:53.492 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:53.753 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:28:53.753 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:28:53.753 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:28:53.753 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:28:53.753 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:28:53.753 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:53.753 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:53.753 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.753 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:53.753 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.753 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:53.753 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:53.753 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:54.013 00:28:54.013 22:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:28:54.013 22:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:28:54.013 22:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:54.273 22:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:54.273 22:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:54.273 22:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.274 22:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:54.274 22:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.274 22:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:28:54.274 { 00:28:54.274 "cntlid": 35, 00:28:54.274 "qid": 0, 00:28:54.274 "state": "enabled", 00:28:54.274 "thread": "nvmf_tgt_poll_group_000", 00:28:54.274 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:28:54.274 "listen_address": { 00:28:54.274 "trtype": "TCP", 00:28:54.274 "adrfam": "IPv4", 00:28:54.274 "traddr": "10.0.0.2", 00:28:54.274 "trsvcid": "4420" 00:28:54.274 }, 00:28:54.274 "peer_address": { 00:28:54.274 "trtype": "TCP", 00:28:54.274 "adrfam": "IPv4", 00:28:54.274 "traddr": "10.0.0.1", 00:28:54.274 "trsvcid": "33880" 00:28:54.274 }, 00:28:54.274 "auth": { 00:28:54.274 "state": "completed", 00:28:54.274 "digest": "sha256", 00:28:54.274 "dhgroup": "ffdhe6144" 00:28:54.274 } 00:28:54.274 } 00:28:54.274 ]' 00:28:54.274 22:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:28:54.274 22:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:54.274 22:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:28:54.274 22:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:28:54.274 22:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:28:54.535 22:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:54.535 22:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:54.535 22:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:54.535 22:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTlmY2Y1MzA1OTQzNDU3YTRjMWE0MzVlNTU1ZGE2YjhHWWQq: --dhchap-ctrl-secret DHHC-1:02:OTNlZTNhZmFiNmUzMTdhNTdmYTY4ZGUyNmMxYTE5NWI3ZmQyNTM3Y2Y5OTNkNmI1qvQ86g==: 00:28:54.535 22:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:01:MTlmY2Y1MzA1OTQzNDU3YTRjMWE0MzVlNTU1ZGE2YjhHWWQq: --dhchap-ctrl-secret DHHC-1:02:OTNlZTNhZmFiNmUzMTdhNTdmYTY4ZGUyNmMxYTE5NWI3ZmQyNTM3Y2Y5OTNkNmI1qvQ86g==: 00:28:55.480 22:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:55.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:55.480 22:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:28:55.480 22:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.480 22:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:55.480 22:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.480 22:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:28:55.480 22:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:55.480 22:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:55.480 22:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:28:55.480 22:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:28:55.480 22:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:28:55.480 22:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:28:55.480 22:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:28:55.480 22:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:55.480 22:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:55.480 22:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.480 22:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:55.480 22:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.480 22:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:55.480 22:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:55.480 22:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:56.052 00:28:56.052 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:28:56.052 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:56.052 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:28:56.052 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:56.052 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:56.052 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.052 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:56.052 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.052 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:28:56.052 { 00:28:56.052 "cntlid": 37, 00:28:56.052 "qid": 0, 00:28:56.052 "state": "enabled", 00:28:56.052 "thread": "nvmf_tgt_poll_group_000", 00:28:56.052 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:28:56.052 "listen_address": { 00:28:56.052 "trtype": "TCP", 00:28:56.052 "adrfam": "IPv4", 00:28:56.052 "traddr": "10.0.0.2", 00:28:56.052 "trsvcid": "4420" 00:28:56.052 }, 00:28:56.052 "peer_address": { 00:28:56.052 "trtype": "TCP", 00:28:56.052 "adrfam": "IPv4", 00:28:56.052 "traddr": "10.0.0.1", 00:28:56.052 "trsvcid": "33906" 00:28:56.052 }, 00:28:56.052 "auth": { 00:28:56.052 "state": "completed", 00:28:56.052 "digest": "sha256", 00:28:56.052 "dhgroup": "ffdhe6144" 00:28:56.052 } 00:28:56.052 } 00:28:56.052 ]' 00:28:56.052 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:28:56.052 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:56.052 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:28:56.313 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:28:56.313 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:28:56.313 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:56.313 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:28:56.313 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:56.313 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2E5N2Q0MjVmNDNjOGYxZGQ3YzQ5ZmZhYzIyZjk0MzU0MGViZTAyYzEzYzkxMTZhU0B5KA==: --dhchap-ctrl-secret DHHC-1:01:NTRiOGYyNWE3ZmFiZjM0YTY3Y2M5MmM5MTAxOWM1ZjDN3lJm: 00:28:56.313 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:02:Y2E5N2Q0MjVmNDNjOGYxZGQ3YzQ5ZmZhYzIyZjk0MzU0MGViZTAyYzEzYzkxMTZhU0B5KA==: --dhchap-ctrl-secret DHHC-1:01:NTRiOGYyNWE3ZmFiZjM0YTY3Y2M5MmM5MTAxOWM1ZjDN3lJm: 00:28:57.256 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:57.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:57.256 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:28:57.256 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.256 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:57.256 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.256 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:28:57.256 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:57.256 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:57.256 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:28:57.256 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:28:57.256 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:28:57.256 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:28:57.257 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:28:57.257 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:57.257 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:28:57.257 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.257 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:57.257 22:27:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.257 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:28:57.257 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:28:57.257 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:28:57.826 00:28:57.826 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:28:57.826 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:57.826 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:28:57.826 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:57.826 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:57.826 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.826 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:57.826 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.826 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:28:57.826 { 00:28:57.826 "cntlid": 39, 00:28:57.826 "qid": 0, 00:28:57.826 "state": "enabled", 00:28:57.826 "thread": "nvmf_tgt_poll_group_000", 00:28:57.826 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:28:57.826 "listen_address": { 00:28:57.826 "trtype": "TCP", 00:28:57.826 "adrfam": "IPv4", 00:28:57.826 "traddr": "10.0.0.2", 00:28:57.826 "trsvcid": "4420" 00:28:57.826 }, 00:28:57.826 "peer_address": { 00:28:57.826 "trtype": "TCP", 00:28:57.826 "adrfam": "IPv4", 00:28:57.826 "traddr": "10.0.0.1", 00:28:57.826 "trsvcid": "54574" 00:28:57.826 }, 00:28:57.826 "auth": { 00:28:57.826 "state": "completed", 00:28:57.826 "digest": "sha256", 00:28:57.826 "dhgroup": "ffdhe6144" 00:28:57.826 } 00:28:57.826 } 00:28:57.826 ]' 00:28:57.826 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:28:58.088 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:58.088 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:28:58.088 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:28:58.088 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:28:58.088 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:28:58.088 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:58.088 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:58.382 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmRjZjBlNWY4Y2FjY2E1ODM3ZmZkZjYxNDBjZjJjMDg5MzE4MGViYTljYjYwOWQ1YmU4OGM4NGNiMTE3OWI5NASd1M0=: 00:28:58.382 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:03:ZmRjZjBlNWY4Y2FjY2E1ODM3ZmZkZjYxNDBjZjJjMDg5MzE4MGViYTljYjYwOWQ1YmU4OGM4NGNiMTE3OWI5NASd1M0=: 00:28:58.994 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:58.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:58.994 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:28:58.994 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.994 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:58.994 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.994 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:28:58.994 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:28:58.994 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:58.994 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:59.261 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:28:59.261 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:28:59.261 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:28:59.261 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:28:59.261 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:28:59.261 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:59.261 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:59.261 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:28:59.261 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:59.262 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.262 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:59.262 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:59.262 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:59.832 00:28:59.832 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:28:59.832 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:28:59.832 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:59.832 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:59.832 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:59.832 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.832 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:59.832 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.832 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:28:59.832 { 00:28:59.832 "cntlid": 41, 00:28:59.832 "qid": 0, 00:28:59.832 "state": "enabled", 00:28:59.832 "thread": "nvmf_tgt_poll_group_000", 00:28:59.832 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:28:59.832 "listen_address": { 00:28:59.832 "trtype": "TCP", 00:28:59.832 "adrfam": "IPv4", 00:28:59.832 "traddr": "10.0.0.2", 00:28:59.832 "trsvcid": "4420" 00:28:59.832 }, 00:28:59.832 "peer_address": { 00:28:59.832 "trtype": "TCP", 00:28:59.832 "adrfam": "IPv4", 00:28:59.832 "traddr": "10.0.0.1", 00:28:59.832 "trsvcid": "54604" 00:28:59.832 }, 00:28:59.832 "auth": { 00:28:59.832 "state": "completed", 00:28:59.833 "digest": "sha256", 00:28:59.833 "dhgroup": "ffdhe8192" 00:28:59.833 } 00:28:59.833 } 00:28:59.833 ]' 00:28:59.833 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:00.093 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:29:00.093 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:00.093 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:29:00.093 22:27:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:00.093 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:00.093 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:00.093 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:00.354 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjNjM2IxZjk3OTkwYjNlNGExMjc5ZDc1MTFiZGNlNjVlODUxZWU0YmVjNDJlMWQ5Ps097Q==: --dhchap-ctrl-secret DHHC-1:03:NzBkNjkxNDA0ZDc5Mzg4OTI1MGMwMTA5YmZlNmIwNGM2ZTc0ZjgyM2VmMWVkMDk0YTUxOGZiNWIwNWI4YThkN3OXbH8=: 00:29:00.354 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:00:YjNjM2IxZjk3OTkwYjNlNGExMjc5ZDc1MTFiZGNlNjVlODUxZWU0YmVjNDJlMWQ5Ps097Q==: --dhchap-ctrl-secret DHHC-1:03:NzBkNjkxNDA0ZDc5Mzg4OTI1MGMwMTA5YmZlNmIwNGM2ZTc0ZjgyM2VmMWVkMDk0YTUxOGZiNWIwNWI4YThkN3OXbH8=: 00:29:00.925 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:00.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:00.925 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:29:00.925 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:00.925 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:00.925 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:00.925 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:00.925 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:00.925 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:01.186 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:29:01.186 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:01.186 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:29:01.186 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:29:01.186 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:29:01.186 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:01.186 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:01.186 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:01.186 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:01.186 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:01.186 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:01.186 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:01.186 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:01.757 00:29:01.757 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:01.757 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:01.757 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:02.017 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:02.017 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:02.017 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.017 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:02.017 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.017 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:02.017 { 00:29:02.017 "cntlid": 43, 00:29:02.017 "qid": 0, 00:29:02.017 "state": "enabled", 00:29:02.017 "thread": "nvmf_tgt_poll_group_000", 00:29:02.017 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:29:02.017 "listen_address": { 00:29:02.017 "trtype": "TCP", 00:29:02.017 "adrfam": "IPv4", 00:29:02.017 "traddr": "10.0.0.2", 00:29:02.017 "trsvcid": "4420" 00:29:02.017 }, 00:29:02.017 "peer_address": { 00:29:02.017 "trtype": "TCP", 00:29:02.017 "adrfam": "IPv4", 00:29:02.017 "traddr": "10.0.0.1", 00:29:02.017 "trsvcid": "54634" 00:29:02.017 }, 00:29:02.017 "auth": { 00:29:02.017 "state": "completed", 00:29:02.017 "digest": "sha256", 00:29:02.017 "dhgroup": "ffdhe8192" 00:29:02.017 } 00:29:02.017 } 00:29:02.017 ]' 00:29:02.017 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:02.017 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:29:02.017 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:02.017 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:29:02.017 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:02.017 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:02.017 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:02.017 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:02.278 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTlmY2Y1MzA1OTQzNDU3YTRjMWE0MzVlNTU1ZGE2YjhHWWQq: --dhchap-ctrl-secret DHHC-1:02:OTNlZTNhZmFiNmUzMTdhNTdmYTY4ZGUyNmMxYTE5NWI3ZmQyNTM3Y2Y5OTNkNmI1qvQ86g==: 00:29:02.278 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:01:MTlmY2Y1MzA1OTQzNDU3YTRjMWE0MzVlNTU1ZGE2YjhHWWQq: --dhchap-ctrl-secret DHHC-1:02:OTNlZTNhZmFiNmUzMTdhNTdmYTY4ZGUyNmMxYTE5NWI3ZmQyNTM3Y2Y5OTNkNmI1qvQ86g==: 00:29:03.220 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:03.220 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:03.220 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:29:03.220 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.221 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:03.221 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.221 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:03.221 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:03.221 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:03.221 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:29:03.221 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:03.221 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:29:03.221 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:29:03.221 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:29:03.221 22:27:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:03.221 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:03.221 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.221 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:03.221 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.221 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:03.221 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:03.221 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:03.792 00:29:03.792 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:03.792 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:03.792 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:03.792 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:03.792 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:03.792 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.792 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:03.792 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.052 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:04.052 { 00:29:04.052 "cntlid": 45, 00:29:04.052 "qid": 0, 00:29:04.052 "state": "enabled", 00:29:04.052 "thread": "nvmf_tgt_poll_group_000", 00:29:04.052 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:29:04.052 "listen_address": { 00:29:04.052 "trtype": "TCP", 00:29:04.052 "adrfam": "IPv4", 00:29:04.052 "traddr": "10.0.0.2", 00:29:04.052 "trsvcid": "4420" 00:29:04.052 }, 00:29:04.052 "peer_address": { 00:29:04.052 "trtype": "TCP", 00:29:04.052 "adrfam": "IPv4", 00:29:04.052 "traddr": "10.0.0.1", 00:29:04.052 "trsvcid": "54656" 00:29:04.052 }, 00:29:04.052 "auth": { 00:29:04.052 "state": "completed", 00:29:04.052 "digest": "sha256", 00:29:04.052 "dhgroup": "ffdhe8192" 00:29:04.052 } 00:29:04.052 } 00:29:04.052 ]' 00:29:04.052 
22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:04.052 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:29:04.052 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:04.052 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:29:04.052 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:04.052 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:04.052 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:04.052 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:04.314 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2E5N2Q0MjVmNDNjOGYxZGQ3YzQ5ZmZhYzIyZjk0MzU0MGViZTAyYzEzYzkxMTZhU0B5KA==: --dhchap-ctrl-secret DHHC-1:01:NTRiOGYyNWE3ZmFiZjM0YTY3Y2M5MmM5MTAxOWM1ZjDN3lJm: 00:29:04.314 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:02:Y2E5N2Q0MjVmNDNjOGYxZGQ3YzQ5ZmZhYzIyZjk0MzU0MGViZTAyYzEzYzkxMTZhU0B5KA==: --dhchap-ctrl-secret DHHC-1:01:NTRiOGYyNWE3ZmFiZjM0YTY3Y2M5MmM5MTAxOWM1ZjDN3lJm: 00:29:04.885 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:04.885 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:04.885 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:29:04.885 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.885 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:04.885 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.885 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:04.885 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:04.885 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:05.147 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:29:05.147 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:05.147 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:29:05.147 22:28:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:29:05.147 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:29:05.147 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:05.147 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:29:05.147 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.147 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:05.147 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.147 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:29:05.147 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:29:05.147 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:29:05.718 00:29:05.718 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:05.718 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:05.718 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:05.979 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:05.979 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:05.979 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.979 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:05.979 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.979 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:05.979 { 00:29:05.979 "cntlid": 47, 00:29:05.979 "qid": 0, 00:29:05.979 "state": "enabled", 00:29:05.979 "thread": "nvmf_tgt_poll_group_000", 00:29:05.979 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:29:05.979 "listen_address": { 00:29:05.979 "trtype": "TCP", 00:29:05.979 "adrfam": "IPv4", 00:29:05.979 "traddr": "10.0.0.2", 00:29:05.979 "trsvcid": "4420" 00:29:05.979 }, 00:29:05.979 "peer_address": { 00:29:05.979 "trtype": "TCP", 00:29:05.979 "adrfam": "IPv4", 00:29:05.979 "traddr": "10.0.0.1", 00:29:05.979 "trsvcid": "54694" 00:29:05.979 }, 00:29:05.979 "auth": { 00:29:05.979 "state": "completed", 00:29:05.979 
"digest": "sha256", 00:29:05.979 "dhgroup": "ffdhe8192" 00:29:05.979 } 00:29:05.979 } 00:29:05.979 ]' 00:29:05.979 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:05.979 22:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:29:05.979 22:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:05.979 22:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:29:05.979 22:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:05.979 22:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:05.979 22:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:05.979 22:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:06.240 22:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmRjZjBlNWY4Y2FjY2E1ODM3ZmZkZjYxNDBjZjJjMDg5MzE4MGViYTljYjYwOWQ1YmU4OGM4NGNiMTE3OWI5NASd1M0=: 00:29:06.240 22:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:03:ZmRjZjBlNWY4Y2FjY2E1ODM3ZmZkZjYxNDBjZjJjMDg5MzE4MGViYTljYjYwOWQ1YmU4OGM4NGNiMTE3OWI5NASd1M0=: 00:29:06.811 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:07.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:07.072 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:29:07.072 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.072 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:07.072 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.072 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:29:07.072 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:29:07.072 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:07.072 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:29:07.072 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:29:07.072 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:29:07.072 22:28:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:07.072 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:29:07.072 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:29:07.072 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:29:07.072 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:07.072 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:07.072 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.072 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:07.072 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.072 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:07.072 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:07.072 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:07.332 00:29:07.332 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:07.332 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:07.332 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:07.592 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:07.592 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:07.592 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.592 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:07.592 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.592 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:07.592 { 00:29:07.592 "cntlid": 49, 00:29:07.592 "qid": 0, 00:29:07.592 "state": "enabled", 00:29:07.592 "thread": "nvmf_tgt_poll_group_000", 00:29:07.592 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:29:07.592 "listen_address": { 00:29:07.592 "trtype": "TCP", 00:29:07.592 "adrfam": "IPv4", 
00:29:07.592 "traddr": "10.0.0.2", 00:29:07.592 "trsvcid": "4420" 00:29:07.592 }, 00:29:07.592 "peer_address": { 00:29:07.592 "trtype": "TCP", 00:29:07.592 "adrfam": "IPv4", 00:29:07.592 "traddr": "10.0.0.1", 00:29:07.592 "trsvcid": "44262" 00:29:07.592 }, 00:29:07.592 "auth": { 00:29:07.592 "state": "completed", 00:29:07.592 "digest": "sha384", 00:29:07.592 "dhgroup": "null" 00:29:07.592 } 00:29:07.592 } 00:29:07.592 ]' 00:29:07.592 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:07.592 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:07.592 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:07.592 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:29:07.592 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:07.592 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:07.592 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:07.592 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:07.896 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjNjM2IxZjk3OTkwYjNlNGExMjc5ZDc1MTFiZGNlNjVlODUxZWU0YmVjNDJlMWQ5Ps097Q==: --dhchap-ctrl-secret DHHC-1:03:NzBkNjkxNDA0ZDc5Mzg4OTI1MGMwMTA5YmZlNmIwNGM2ZTc0ZjgyM2VmMWVkMDk0YTUxOGZiNWIwNWI4YThkN3OXbH8=: 00:29:07.896 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:00:YjNjM2IxZjk3OTkwYjNlNGExMjc5ZDc1MTFiZGNlNjVlODUxZWU0YmVjNDJlMWQ5Ps097Q==: --dhchap-ctrl-secret DHHC-1:03:NzBkNjkxNDA0ZDc5Mzg4OTI1MGMwMTA5YmZlNmIwNGM2ZTc0ZjgyM2VmMWVkMDk0YTUxOGZiNWIwNWI4YThkN3OXbH8=: 00:29:08.466 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:08.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:08.726 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:29:08.726 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.726 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:08.726 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.726 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:08.726 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:29:08.726 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:29:08.726 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:29:08.726 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:08.726 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:29:08.726 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:29:08.726 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:29:08.726 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:08.726 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:08.726 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.726 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:08.726 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.726 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:08.726 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:08.726 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:08.985 00:29:08.985 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:08.985 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:08.985 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:09.245 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:09.245 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:09.245 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.245 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:09.245 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.245 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:09.245 { 00:29:09.245 "cntlid": 51, 00:29:09.245 "qid": 0, 00:29:09.245 "state": "enabled", 
00:29:09.245 "thread": "nvmf_tgt_poll_group_000", 00:29:09.245 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:29:09.245 "listen_address": { 00:29:09.245 "trtype": "TCP", 00:29:09.245 "adrfam": "IPv4", 00:29:09.245 "traddr": "10.0.0.2", 00:29:09.245 "trsvcid": "4420" 00:29:09.245 }, 00:29:09.245 "peer_address": { 00:29:09.245 "trtype": "TCP", 00:29:09.245 "adrfam": "IPv4", 00:29:09.245 "traddr": "10.0.0.1", 00:29:09.245 "trsvcid": "44290" 00:29:09.245 }, 00:29:09.245 "auth": { 00:29:09.245 "state": "completed", 00:29:09.245 "digest": "sha384", 00:29:09.245 "dhgroup": "null" 00:29:09.245 } 00:29:09.245 } 00:29:09.245 ]' 00:29:09.245 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:09.245 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:09.245 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:09.245 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:29:09.245 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:09.245 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:09.245 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:09.245 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:09.504 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTlmY2Y1MzA1OTQzNDU3YTRjMWE0MzVlNTU1ZGE2YjhHWWQq: --dhchap-ctrl-secret DHHC-1:02:OTNlZTNhZmFiNmUzMTdhNTdmYTY4ZGUyNmMxYTE5NWI3ZmQyNTM3Y2Y5OTNkNmI1qvQ86g==: 00:29:09.504 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:01:MTlmY2Y1MzA1OTQzNDU3YTRjMWE0MzVlNTU1ZGE2YjhHWWQq: --dhchap-ctrl-secret DHHC-1:02:OTNlZTNhZmFiNmUzMTdhNTdmYTY4ZGUyNmMxYTE5NWI3ZmQyNTM3Y2Y5OTNkNmI1qvQ86g==: 00:29:10.448 22:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:10.448 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:10.448 22:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:29:10.448 22:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.448 22:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:10.448 22:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.448 22:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:10.448 22:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:29:10.448 22:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:29:10.448 22:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:29:10.448 22:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:10.448 22:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:29:10.448 22:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:29:10.448 22:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:29:10.448 22:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:10.448 22:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:10.448 22:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.448 22:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:10.448 22:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.448 22:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:10.448 22:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:10.448 22:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:10.708 00:29:10.708 22:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:10.708 22:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:10.708 22:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:10.968 22:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:10.968 22:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:10.968 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.968 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:10.968 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.968 22:28:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:10.968 { 00:29:10.968 "cntlid": 53, 00:29:10.968 "qid": 0, 00:29:10.968 "state": "enabled", 00:29:10.968 "thread": "nvmf_tgt_poll_group_000", 00:29:10.968 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:29:10.968 "listen_address": { 00:29:10.968 "trtype": "TCP", 00:29:10.968 "adrfam": "IPv4", 00:29:10.968 "traddr": "10.0.0.2", 00:29:10.968 "trsvcid": "4420" 00:29:10.968 }, 00:29:10.968 "peer_address": { 00:29:10.968 "trtype": "TCP", 00:29:10.968 "adrfam": "IPv4", 00:29:10.968 "traddr": "10.0.0.1", 00:29:10.968 "trsvcid": "44336" 00:29:10.968 }, 00:29:10.968 "auth": { 00:29:10.968 "state": "completed", 00:29:10.968 "digest": "sha384", 00:29:10.968 "dhgroup": "null" 00:29:10.968 } 00:29:10.968 } 00:29:10.968 ]' 00:29:10.968 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:10.968 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:10.968 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:10.968 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:29:10.968 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:10.968 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:10.968 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:10.968 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:11.228 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2E5N2Q0MjVmNDNjOGYxZGQ3YzQ5ZmZhYzIyZjk0MzU0MGViZTAyYzEzYzkxMTZhU0B5KA==: --dhchap-ctrl-secret DHHC-1:01:NTRiOGYyNWE3ZmFiZjM0YTY3Y2M5MmM5MTAxOWM1ZjDN3lJm: 00:29:11.228 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:02:Y2E5N2Q0MjVmNDNjOGYxZGQ3YzQ5ZmZhYzIyZjk0MzU0MGViZTAyYzEzYzkxMTZhU0B5KA==: --dhchap-ctrl-secret DHHC-1:01:NTRiOGYyNWE3ZmFiZjM0YTY3Y2M5MmM5MTAxOWM1ZjDN3lJm: 00:29:12.168 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:12.168 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:12.168 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:29:12.168 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.168 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:12.168 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.169 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:29:12.169 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:29:12.169 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:29:12.169 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:29:12.169 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:12.169 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:29:12.169 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:29:12.169 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:29:12.169 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:12.169 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:29:12.169 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.169 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:12.169 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.169 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:29:12.169 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:29:12.169 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:29:12.429 00:29:12.429 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:12.429 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:12.429 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:12.688 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:12.688 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:12.689 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.689 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:12.689 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.689 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:12.689 { 00:29:12.689 "cntlid": 55, 00:29:12.689 "qid": 0, 00:29:12.689 "state": "enabled", 00:29:12.689 "thread": "nvmf_tgt_poll_group_000", 00:29:12.689 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:29:12.689 "listen_address": { 00:29:12.689 "trtype": "TCP", 00:29:12.689 "adrfam": "IPv4", 00:29:12.689 "traddr": "10.0.0.2", 00:29:12.689 "trsvcid": "4420" 00:29:12.689 }, 00:29:12.689 "peer_address": { 00:29:12.689 "trtype": "TCP", 00:29:12.689 "adrfam": "IPv4", 00:29:12.689 "traddr": "10.0.0.1", 00:29:12.689 "trsvcid": "44368" 00:29:12.689 }, 00:29:12.689 "auth": { 00:29:12.689 "state": "completed", 00:29:12.689 "digest": "sha384", 00:29:12.689 "dhgroup": "null" 00:29:12.689 } 00:29:12.689 } 00:29:12.689 ]' 00:29:12.689 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:12.689 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:12.689 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:12.689 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:29:12.689 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:12.689 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:12.689 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:12.689 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:12.948 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmRjZjBlNWY4Y2FjY2E1ODM3ZmZkZjYxNDBjZjJjMDg5MzE4MGViYTljYjYwOWQ1YmU4OGM4NGNiMTE3OWI5NASd1M0=: 00:29:12.948 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:03:ZmRjZjBlNWY4Y2FjY2E1ODM3ZmZkZjYxNDBjZjJjMDg5MzE4MGViYTljYjYwOWQ1YmU4OGM4NGNiMTE3OWI5NASd1M0=: 00:29:13.517 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:13.778 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:13.778 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:29:13.778 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.778 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:13.778 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.778 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:29:13.778 22:28:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:13.778 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:13.778 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:13.778 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:29:13.778 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:13.778 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:29:13.778 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:29:13.778 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:29:13.778 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:13.778 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:13.778 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.778 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:13.778 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.778 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:13.778 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:13.778 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:14.037 00:29:14.037 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:14.037 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:14.037 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:14.297 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:14.297 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:14.297 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:29:14.297 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:14.297 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.297 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:14.297 { 00:29:14.297 "cntlid": 57, 00:29:14.297 "qid": 0, 00:29:14.297 "state": "enabled", 00:29:14.297 "thread": "nvmf_tgt_poll_group_000", 00:29:14.297 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:29:14.297 "listen_address": { 00:29:14.297 "trtype": "TCP", 00:29:14.297 "adrfam": "IPv4", 00:29:14.297 "traddr": "10.0.0.2", 00:29:14.297 "trsvcid": "4420" 00:29:14.297 }, 00:29:14.297 "peer_address": { 00:29:14.297 "trtype": "TCP", 00:29:14.297 "adrfam": "IPv4", 00:29:14.297 "traddr": "10.0.0.1", 00:29:14.297 "trsvcid": "44382" 00:29:14.297 }, 00:29:14.297 "auth": { 00:29:14.297 "state": "completed", 00:29:14.297 "digest": "sha384", 00:29:14.297 "dhgroup": "ffdhe2048" 00:29:14.297 } 00:29:14.297 } 00:29:14.297 ]' 00:29:14.297 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:14.297 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:14.297 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:14.297 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:29:14.297 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:14.297 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:14.297 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:14.297 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:14.557 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjNjM2IxZjk3OTkwYjNlNGExMjc5ZDc1MTFiZGNlNjVlODUxZWU0YmVjNDJlMWQ5Ps097Q==: --dhchap-ctrl-secret DHHC-1:03:NzBkNjkxNDA0ZDc5Mzg4OTI1MGMwMTA5YmZlNmIwNGM2ZTc0ZjgyM2VmMWVkMDk0YTUxOGZiNWIwNWI4YThkN3OXbH8=: 00:29:14.557 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:00:YjNjM2IxZjk3OTkwYjNlNGExMjc5ZDc1MTFiZGNlNjVlODUxZWU0YmVjNDJlMWQ5Ps097Q==: --dhchap-ctrl-secret DHHC-1:03:NzBkNjkxNDA0ZDc5Mzg4OTI1MGMwMTA5YmZlNmIwNGM2ZTc0ZjgyM2VmMWVkMDk0YTUxOGZiNWIwNWI4YThkN3OXbH8=: 00:29:15.502 22:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:15.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:15.502 22:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:29:15.502 22:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.502 22:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:15.502 22:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.502 22:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:15.502 22:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:15.502 22:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:15.502 22:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:29:15.502 22:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:15.502 22:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:29:15.502 22:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:29:15.502 22:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:29:15.502 22:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:15.502 22:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:15.502 22:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.502 22:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:15.502 22:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.502 22:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:15.502 22:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:15.502 22:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:15.762 00:29:15.762 22:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:15.762 22:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:15.762 22:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:16.023 22:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:16.023 22:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:16.023 22:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:16.023 22:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:16.023 22:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:16.023 22:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:16.023 { 00:29:16.023 "cntlid": 59, 00:29:16.023 "qid": 0, 00:29:16.023 "state": "enabled", 00:29:16.023 "thread": "nvmf_tgt_poll_group_000", 00:29:16.023 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:29:16.023 "listen_address": { 00:29:16.023 "trtype": "TCP", 00:29:16.023 "adrfam": "IPv4", 00:29:16.023 "traddr": "10.0.0.2", 00:29:16.023 "trsvcid": "4420" 00:29:16.023 }, 00:29:16.023 "peer_address": { 00:29:16.023 "trtype": "TCP", 00:29:16.023 "adrfam": "IPv4", 00:29:16.023 "traddr": "10.0.0.1", 00:29:16.023 "trsvcid": "44400" 00:29:16.023 }, 00:29:16.023 "auth": { 00:29:16.023 "state": "completed", 00:29:16.023 "digest": "sha384", 00:29:16.023 "dhgroup": "ffdhe2048" 00:29:16.023 } 00:29:16.023 } 00:29:16.023 ]' 00:29:16.023 22:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:16.023 22:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:16.023 22:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:16.023 22:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:29:16.023 22:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:16.023 22:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:16.023 22:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:16.023 22:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:16.284 22:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTlmY2Y1MzA1OTQzNDU3YTRjMWE0MzVlNTU1ZGE2YjhHWWQq: --dhchap-ctrl-secret DHHC-1:02:OTNlZTNhZmFiNmUzMTdhNTdmYTY4ZGUyNmMxYTE5NWI3ZmQyNTM3Y2Y5OTNkNmI1qvQ86g==: 00:29:16.284 22:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:01:MTlmY2Y1MzA1OTQzNDU3YTRjMWE0MzVlNTU1ZGE2YjhHWWQq: --dhchap-ctrl-secret DHHC-1:02:OTNlZTNhZmFiNmUzMTdhNTdmYTY4ZGUyNmMxYTE5NWI3ZmQyNTM3Y2Y5OTNkNmI1qvQ86g==: 00:29:17.226 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:17.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:17.226 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:29:17.226 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.226 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:17.226 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.226 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:17.226 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:17.226 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:17.226 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:29:17.226 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:17.226 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:29:17.226 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:29:17.226 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:29:17.226 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:17.226 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:17.226 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.226 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:17.226 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.226 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:17.226 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:17.226 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:17.487 00:29:17.487 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:17.487 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:17.487 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:17.748 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:17.748 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:17.748 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.748 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:17.748 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.748 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:17.748 { 00:29:17.749 "cntlid": 61, 00:29:17.749 "qid": 0, 00:29:17.749 "state": "enabled", 00:29:17.749 "thread": "nvmf_tgt_poll_group_000", 00:29:17.749 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:29:17.749 "listen_address": { 00:29:17.749 "trtype": "TCP", 00:29:17.749 "adrfam": "IPv4", 00:29:17.749 "traddr": "10.0.0.2", 00:29:17.749 "trsvcid": "4420" 00:29:17.749 }, 00:29:17.749 "peer_address": { 00:29:17.749 "trtype": "TCP", 00:29:17.749 "adrfam": "IPv4", 00:29:17.749 "traddr": "10.0.0.1", 00:29:17.749 "trsvcid": "35804" 00:29:17.749 }, 00:29:17.749 "auth": { 00:29:17.749 "state": "completed", 00:29:17.749 "digest": "sha384", 00:29:17.749 "dhgroup": "ffdhe2048" 00:29:17.749 } 00:29:17.749 } 00:29:17.749 ]' 00:29:17.749 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:17.749 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:17.749 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:17.749 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:29:17.749 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:17.749 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:17.749 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:17.749 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:18.009 22:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2E5N2Q0MjVmNDNjOGYxZGQ3YzQ5ZmZhYzIyZjk0MzU0MGViZTAyYzEzYzkxMTZhU0B5KA==: --dhchap-ctrl-secret DHHC-1:01:NTRiOGYyNWE3ZmFiZjM0YTY3Y2M5MmM5MTAxOWM1ZjDN3lJm: 00:29:18.009 22:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:02:Y2E5N2Q0MjVmNDNjOGYxZGQ3YzQ5ZmZhYzIyZjk0MzU0MGViZTAyYzEzYzkxMTZhU0B5KA==: --dhchap-ctrl-secret DHHC-1:01:NTRiOGYyNWE3ZmFiZjM0YTY3Y2M5MmM5MTAxOWM1ZjDN3lJm: 00:29:18.952 22:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:18.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:18.952 22:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:29:18.952 22:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.952 22:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:18.952 22:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.952 22:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:18.952 22:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:18.952 22:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:18.952 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:29:18.952 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:18.952 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:29:18.952 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:29:18.952 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:29:18.952 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:18.952 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:29:18.952 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.952 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:18.952 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.952 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:29:18.953 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:29:18.953 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:29:19.214 00:29:19.214 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:19.214 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:29:19.214 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:19.475 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:19.475 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:19.475 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.475 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:19.475 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.475 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:19.475 { 00:29:19.475 "cntlid": 63, 00:29:19.475 "qid": 0, 00:29:19.475 "state": "enabled", 00:29:19.475 "thread": "nvmf_tgt_poll_group_000", 00:29:19.475 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:29:19.475 "listen_address": { 00:29:19.475 "trtype": "TCP", 00:29:19.475 "adrfam": "IPv4", 00:29:19.475 "traddr": "10.0.0.2", 00:29:19.475 "trsvcid": "4420" 00:29:19.475 }, 00:29:19.475 "peer_address": { 00:29:19.475 "trtype": "TCP", 00:29:19.475 "adrfam": "IPv4", 00:29:19.475 "traddr": "10.0.0.1", 00:29:19.475 "trsvcid": "35846" 00:29:19.475 }, 00:29:19.475 "auth": { 00:29:19.475 "state": "completed", 00:29:19.475 "digest": "sha384", 00:29:19.475 "dhgroup": "ffdhe2048" 00:29:19.475 } 00:29:19.475 } 00:29:19.475 ]' 00:29:19.475 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:19.475 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:19.475 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:19.475 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:29:19.475 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:19.475 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:19.475 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:19.475 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:19.736 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmRjZjBlNWY4Y2FjY2E1ODM3ZmZkZjYxNDBjZjJjMDg5MzE4MGViYTljYjYwOWQ1YmU4OGM4NGNiMTE3OWI5NASd1M0=: 00:29:19.736 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:03:ZmRjZjBlNWY4Y2FjY2E1ODM3ZmZkZjYxNDBjZjJjMDg5MzE4MGViYTljYjYwOWQ1YmU4OGM4NGNiMTE3OWI5NASd1M0=: 00:29:20.679 22:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:29:20.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:20.679 22:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:29:20.679 22:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.679 22:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:20.679 22:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.679 22:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:29:20.679 22:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:20.679 22:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:20.679 22:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:20.679 22:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:29:20.679 22:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:20.679 22:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:29:20.679 22:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:29:20.679 22:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:29:20.679 22:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:20.679 22:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:20.679 22:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.679 22:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:20.679 22:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.679 22:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:20.679 22:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:20.679 22:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:20.940 
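Every iteration in this stretch of the log exercises the same cycle: pin the host to one digest/dhgroup pair, register the host NQN on the subsystem with the DH-CHAP key under test, attach a controller through the host RPC socket, check the authenticated qpair, then tear it all down again. A condensed sketch of the RPC-driven half, using only the NQNs, address, socket paths, and flags the trace itself records (the flat structure here is a simplification, not the verbatim target/auth.sh):

    # Sketch of one connect_authenticate cycle, assuming the paths from the trace.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204

    # Host side (rpc.py -s /var/tmp/host.sock in the trace): allow only the
    # digest/dhgroup pair under test for the DH-CHAP negotiation.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

    # Target side (rpc_cmd in the trace, i.e. the target's default socket):
    # admit the host with key0 and, since one is configured, ckey0.
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Attach a controller from the host app; authentication happens here.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Verify the qpair negotiated what was asked for, then clean up.
    $rpc nvmf_subsystem_get_qpairs "$subnqn" \
        | jq -e '.[0].auth | .state == "completed" and .dhgroup == "ffdhe3072"'
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The single jq -e predicate condenses the three separate jq -r / [[ ... ]] comparisons the script performs on digest, dhgroup, and state; the per-field checks visible in the log are equivalent.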
00:29:20.940 22:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:20.940 22:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:20.940 22:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:21.200 22:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:21.200 22:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:21.200 22:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.200 22:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:21.200 22:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.200 22:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:21.200 { 00:29:21.200 "cntlid": 65, 00:29:21.200 "qid": 0, 00:29:21.200 "state": "enabled", 00:29:21.200 "thread": "nvmf_tgt_poll_group_000", 00:29:21.200 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:29:21.200 "listen_address": { 00:29:21.200 "trtype": "TCP", 00:29:21.200 "adrfam": "IPv4", 00:29:21.200 "traddr": "10.0.0.2", 00:29:21.200 "trsvcid": "4420" 00:29:21.200 }, 00:29:21.200 "peer_address": { 00:29:21.200 "trtype": "TCP", 00:29:21.200 "adrfam": "IPv4", 00:29:21.200 "traddr": "10.0.0.1", 00:29:21.200 "trsvcid": "35870" 00:29:21.200 }, 00:29:21.200 "auth": { 00:29:21.200 "state": "completed", 00:29:21.200 "digest": "sha384", 00:29:21.200 "dhgroup": "ffdhe3072" 00:29:21.200 } 00:29:21.200 } 00:29:21.200 ]' 00:29:21.200 22:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:21.200 22:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:21.201 22:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:21.201 22:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:29:21.201 22:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:21.201 22:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:21.201 22:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:21.201 22:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:21.462 22:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjNjM2IxZjk3OTkwYjNlNGExMjc5ZDc1MTFiZGNlNjVlODUxZWU0YmVjNDJlMWQ5Ps097Q==: --dhchap-ctrl-secret DHHC-1:03:NzBkNjkxNDA0ZDc5Mzg4OTI1MGMwMTA5YmZlNmIwNGM2ZTc0ZjgyM2VmMWVkMDk0YTUxOGZiNWIwNWI4YThkN3OXbH8=: 00:29:21.462 22:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:00:YjNjM2IxZjk3OTkwYjNlNGExMjc5ZDc1MTFiZGNlNjVlODUxZWU0YmVjNDJlMWQ5Ps097Q==: --dhchap-ctrl-secret DHHC-1:03:NzBkNjkxNDA0ZDc5Mzg4OTI1MGMwMTA5YmZlNmIwNGM2ZTc0ZjgyM2VmMWVkMDk0YTUxOGZiNWIwNWI4YThkN3OXbH8=: 00:29:22.403 22:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:22.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:22.403 22:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:29:22.403 22:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.403 22:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:22.403 22:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.403 22:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:22.403 22:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:22.403 22:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:22.403 22:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:29:22.403 22:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:22.403 22:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:29:22.403 22:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:29:22.403 22:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:29:22.403 22:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:22.403 22:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:22.403 22:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.403 22:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:22.403 22:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.403 22:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:22.403 22:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:22.403 22:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:22.665 00:29:22.665 22:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:22.665 22:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:22.665 22:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:22.926 22:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:22.926 22:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:22.926 22:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.926 22:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:22.926 22:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.926 22:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:22.926 { 00:29:22.926 "cntlid": 67, 00:29:22.926 "qid": 0, 00:29:22.926 "state": "enabled", 00:29:22.926 "thread": "nvmf_tgt_poll_group_000", 00:29:22.926 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:29:22.926 "listen_address": { 00:29:22.926 "trtype": "TCP", 00:29:22.926 "adrfam": "IPv4", 00:29:22.926 "traddr": "10.0.0.2", 00:29:22.926 "trsvcid": "4420" 00:29:22.926 }, 00:29:22.926 "peer_address": { 00:29:22.926 "trtype": "TCP", 00:29:22.926 "adrfam": "IPv4", 00:29:22.926 "traddr": "10.0.0.1", 00:29:22.926 "trsvcid": "35902" 00:29:22.926 }, 00:29:22.926 "auth": { 00:29:22.926 "state": "completed", 00:29:22.926 "digest": "sha384", 00:29:22.926 "dhgroup": "ffdhe3072" 00:29:22.926 } 00:29:22.926 } 00:29:22.926 ]' 00:29:22.927 22:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:22.927 22:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:22.927 22:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:22.927 22:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:29:22.927 22:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:22.927 22:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:22.927 22:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:22.927 22:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:23.187 22:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTlmY2Y1MzA1OTQzNDU3YTRjMWE0MzVlNTU1ZGE2YjhHWWQq: --dhchap-ctrl-secret 
DHHC-1:02:OTNlZTNhZmFiNmUzMTdhNTdmYTY4ZGUyNmMxYTE5NWI3ZmQyNTM3Y2Y5OTNkNmI1qvQ86g==: 00:29:23.187 22:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:01:MTlmY2Y1MzA1OTQzNDU3YTRjMWE0MzVlNTU1ZGE2YjhHWWQq: --dhchap-ctrl-secret DHHC-1:02:OTNlZTNhZmFiNmUzMTdhNTdmYTY4ZGUyNmMxYTE5NWI3ZmQyNTM3Y2Y5OTNkNmI1qvQ86g==: 00:29:23.760 22:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:24.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:24.020 22:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:29:24.020 22:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.020 22:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:24.020 22:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.020 22:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:24.020 22:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:24.020 22:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:24.020 22:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:29:24.021 22:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:24.021 22:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:29:24.021 22:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:29:24.021 22:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:29:24.021 22:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:24.021 22:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:24.021 22:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.021 22:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:24.021 22:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.021 22:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:24.021 22:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:24.021 22:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:24.281 00:29:24.281 22:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:24.281 22:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:24.281 22:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:24.543 22:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:24.543 22:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:24.543 22:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.543 22:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:24.543 22:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.543 22:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:24.543 { 00:29:24.543 "cntlid": 69, 00:29:24.543 "qid": 0, 00:29:24.543 "state": "enabled", 00:29:24.543 "thread": "nvmf_tgt_poll_group_000", 00:29:24.543 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:29:24.543 "listen_address": { 00:29:24.543 "trtype": "TCP", 00:29:24.543 "adrfam": "IPv4", 00:29:24.543 "traddr": "10.0.0.2", 00:29:24.543 "trsvcid": "4420" 00:29:24.543 }, 00:29:24.543 "peer_address": { 00:29:24.543 "trtype": "TCP", 00:29:24.543 "adrfam": "IPv4", 00:29:24.543 "traddr": "10.0.0.1", 00:29:24.543 "trsvcid": "35922" 00:29:24.543 }, 00:29:24.543 "auth": { 00:29:24.543 "state": "completed", 00:29:24.543 "digest": "sha384", 00:29:24.543 "dhgroup": "ffdhe3072" 00:29:24.543 } 00:29:24.543 } 00:29:24.543 ]' 00:29:24.543 22:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:24.543 22:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:24.543 22:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:24.543 22:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:29:24.543 22:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:24.803 22:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:24.803 22:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:24.803 22:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:29:24.803 22:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2E5N2Q0MjVmNDNjOGYxZGQ3YzQ5ZmZhYzIyZjk0MzU0MGViZTAyYzEzYzkxMTZhU0B5KA==: --dhchap-ctrl-secret DHHC-1:01:NTRiOGYyNWE3ZmFiZjM0YTY3Y2M5MmM5MTAxOWM1ZjDN3lJm: 00:29:24.803 22:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:02:Y2E5N2Q0MjVmNDNjOGYxZGQ3YzQ5ZmZhYzIyZjk0MzU0MGViZTAyYzEzYzkxMTZhU0B5KA==: --dhchap-ctrl-secret DHHC-1:01:NTRiOGYyNWE3ZmFiZjM0YTY3Y2M5MmM5MTAxOWM1ZjDN3lJm: 00:29:25.746 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:25.746 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:25.746 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:29:25.746 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.746 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:25.746 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.746 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:25.746 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:25.746 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:25.746 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:29:25.746 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:25.746 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:29:25.746 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:29:25.746 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:29:25.746 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:25.746 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:29:25.746 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.746 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:25.746 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.746 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
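One detail worth noticing in the key3 calls just above: nvmf_subsystem_add_host and bdev_connect carry no --dhchap-ctrlr-key, because ckeys[3] is empty and the traced line ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expands to an empty array in that case. A minimal standalone illustration of that bash idiom (the ckeys contents and the echoed command are hypothetical placeholders):

    #!/usr/bin/env bash
    # ${var:+word} yields "word" only when var is set and non-empty, so an
    # optional flag/value pair can be injected through an array expansion.
    ckeys=("c0" "c1" "c2" "")   # hypothetical: key index 3 has no ctrlr secret

    for keyid in 0 3; do
        # Empty ckeys[keyid] -> ckey=() -> no --dhchap-ctrlr-key emitted at all.
        ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
        echo rpc.py nvmf_subsystem_add_host ... --dhchap-key "key$keyid" "${ckey[@]}"
    done
    # Prints:
    #   rpc.py nvmf_subsystem_add_host ... --dhchap-key key0 --dhchap-ctrlr-key ckey0
    #   rpc.py nvmf_subsystem_add_host ... --dhchap-key key3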
00:29:25.746 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:29:25.746 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:29:26.007 00:29:26.007 22:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:26.007 22:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:26.007 22:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:26.270 22:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:26.270 22:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:26.270 22:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.270 22:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:26.270 22:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.270 22:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:26.270 { 00:29:26.270 "cntlid": 71, 00:29:26.270 "qid": 0, 00:29:26.270 "state": "enabled", 00:29:26.270 "thread": "nvmf_tgt_poll_group_000", 00:29:26.270 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:29:26.270 "listen_address": { 00:29:26.270 "trtype": "TCP", 00:29:26.270 "adrfam": "IPv4", 00:29:26.270 "traddr": "10.0.0.2", 00:29:26.270 "trsvcid": "4420" 00:29:26.270 }, 00:29:26.270 "peer_address": { 00:29:26.270 "trtype": "TCP", 00:29:26.270 "adrfam": "IPv4", 00:29:26.270 "traddr": "10.0.0.1", 00:29:26.270 "trsvcid": "35948" 00:29:26.270 }, 00:29:26.270 "auth": { 00:29:26.270 "state": "completed", 00:29:26.270 "digest": "sha384", 00:29:26.270 "dhgroup": "ffdhe3072" 00:29:26.270 } 00:29:26.270 } 00:29:26.270 ]' 00:29:26.270 22:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:26.270 22:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:26.270 22:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:26.270 22:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:29:26.270 22:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:26.532 22:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:26.532 22:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:26.532 22:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:26.532 22:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmRjZjBlNWY4Y2FjY2E1ODM3ZmZkZjYxNDBjZjJjMDg5MzE4MGViYTljYjYwOWQ1YmU4OGM4NGNiMTE3OWI5NASd1M0=: 00:29:26.532 22:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:03:ZmRjZjBlNWY4Y2FjY2E1ODM3ZmZkZjYxNDBjZjJjMDg5MzE4MGViYTljYjYwOWQ1YmU4OGM4NGNiMTE3OWI5NASd1M0=: 00:29:27.476 22:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:27.476 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:27.476 22:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:29:27.476 22:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.476 22:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:27.476 22:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.476 22:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:29:27.476 22:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:27.476 22:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:27.476 22:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:27.476 22:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:29:27.476 22:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:27.476 22:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:29:27.476 22:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:29:27.476 22:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:29:27.476 22:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:27.476 22:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:27.476 22:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.476 22:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:27.476 22:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
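Each RPC-driven pass is mirrored by a kernel-initiator pass: nvme-cli is handed the same secrets in DHHC-1 wire format and must complete the handshake end to end before the controller is disconnected and the host is removed again. Condensed from the connect/disconnect lines the log records, with <key>/<ctrl-key> standing in for the full DHHC-1 blobs (in nvme-cli's connect command, -i and -l are the --nr-io-queues and --ctrl-loss-tmo shorthands):

    # Kernel-initiator leg, as recorded in the trace; secrets abbreviated.
    hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "nqn.2014-08.org.nvmexpress:uuid:$hostid" --hostid "$hostid" -l 0 \
        --dhchap-secret "<key>" --dhchap-ctrl-secret "<ctrl-key>"
    # (--dhchap-ctrl-secret is dropped for key3, which has no controller secret.)

    # Expect: "NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0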
00:29:27.476 22:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:27.476 22:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:27.476 22:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:27.737 00:29:27.737 22:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:27.737 22:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:27.737 22:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:27.999 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:27.999 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:27.999 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.999 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:27.999 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.999 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:27.999 { 00:29:27.999 "cntlid": 73, 00:29:27.999 "qid": 0, 00:29:27.999 "state": "enabled", 00:29:27.999 "thread": "nvmf_tgt_poll_group_000", 00:29:27.999 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:29:27.999 "listen_address": { 00:29:27.999 "trtype": "TCP", 00:29:27.999 "adrfam": "IPv4", 00:29:27.999 "traddr": "10.0.0.2", 00:29:27.999 "trsvcid": "4420" 00:29:27.999 }, 00:29:27.999 "peer_address": { 00:29:27.999 "trtype": "TCP", 00:29:27.999 "adrfam": "IPv4", 00:29:27.999 "traddr": "10.0.0.1", 00:29:27.999 "trsvcid": "48366" 00:29:27.999 }, 00:29:27.999 "auth": { 00:29:27.999 "state": "completed", 00:29:27.999 "digest": "sha384", 00:29:27.999 "dhgroup": "ffdhe4096" 00:29:27.999 } 00:29:27.999 } 00:29:27.999 ]' 00:29:27.999 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:27.999 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:27.999 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:27.999 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:29:27.999 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:28.260 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:28.260 
22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:28.260 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:28.260 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjNjM2IxZjk3OTkwYjNlNGExMjc5ZDc1MTFiZGNlNjVlODUxZWU0YmVjNDJlMWQ5Ps097Q==: --dhchap-ctrl-secret DHHC-1:03:NzBkNjkxNDA0ZDc5Mzg4OTI1MGMwMTA5YmZlNmIwNGM2ZTc0ZjgyM2VmMWVkMDk0YTUxOGZiNWIwNWI4YThkN3OXbH8=: 00:29:28.261 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:00:YjNjM2IxZjk3OTkwYjNlNGExMjc5ZDc1MTFiZGNlNjVlODUxZWU0YmVjNDJlMWQ5Ps097Q==: --dhchap-ctrl-secret DHHC-1:03:NzBkNjkxNDA0ZDc5Mzg4OTI1MGMwMTA5YmZlNmIwNGM2ZTc0ZjgyM2VmMWVkMDk0YTUxOGZiNWIwNWI4YThkN3OXbH8=: 00:29:29.203 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:29.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:29.203 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:29:29.203 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.203 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:29.203 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.204 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:29.204 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:29.204 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:29.204 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:29:29.204 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:29.204 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:29:29.204 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:29:29.204 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:29:29.204 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:29.204 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:29.204 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.204 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:29.204 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.204 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:29.204 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:29.204 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:29.465 00:29:29.465 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:29.465 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:29.465 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:29.725 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:29.725 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:29.725 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.725 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:29.725 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.725 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:29.725 { 00:29:29.725 "cntlid": 75, 00:29:29.725 "qid": 0, 00:29:29.725 "state": "enabled", 00:29:29.725 "thread": "nvmf_tgt_poll_group_000", 00:29:29.725 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:29:29.725 "listen_address": { 00:29:29.725 "trtype": "TCP", 00:29:29.725 "adrfam": "IPv4", 00:29:29.725 "traddr": "10.0.0.2", 00:29:29.725 "trsvcid": "4420" 00:29:29.725 }, 00:29:29.725 "peer_address": { 00:29:29.725 "trtype": "TCP", 00:29:29.725 "adrfam": "IPv4", 00:29:29.725 "traddr": "10.0.0.1", 00:29:29.725 "trsvcid": "48404" 00:29:29.725 }, 00:29:29.725 "auth": { 00:29:29.725 "state": "completed", 00:29:29.725 "digest": "sha384", 00:29:29.725 "dhgroup": "ffdhe4096" 00:29:29.725 } 00:29:29.725 } 00:29:29.725 ]' 00:29:29.725 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:29.725 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:29.725 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:29.725 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:29:29.725 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:29.985 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:29.985 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:29.985 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:29.985 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTlmY2Y1MzA1OTQzNDU3YTRjMWE0MzVlNTU1ZGE2YjhHWWQq: --dhchap-ctrl-secret DHHC-1:02:OTNlZTNhZmFiNmUzMTdhNTdmYTY4ZGUyNmMxYTE5NWI3ZmQyNTM3Y2Y5OTNkNmI1qvQ86g==: 00:29:29.986 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:01:MTlmY2Y1MzA1OTQzNDU3YTRjMWE0MzVlNTU1ZGE2YjhHWWQq: --dhchap-ctrl-secret DHHC-1:02:OTNlZTNhZmFiNmUzMTdhNTdmYTY4ZGUyNmMxYTE5NWI3ZmQyNTM3Y2Y5OTNkNmI1qvQ86g==: 00:29:30.927 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:30.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:30.927 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:29:30.927 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.927 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:30.927 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.927 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:30.927 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:30.927 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:30.927 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:29:30.927 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:30.927 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:29:30.927 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:29:30.927 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:29:30.927 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:30.927 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:30.928 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.928 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:30.928 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.928 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:30.928 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:30.928 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:31.188 00:29:31.188 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:31.188 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:31.188 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:31.449 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:31.449 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:31.449 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.449 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:31.449 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.449 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:31.449 { 00:29:31.449 "cntlid": 77, 00:29:31.449 "qid": 0, 00:29:31.449 "state": "enabled", 00:29:31.449 "thread": "nvmf_tgt_poll_group_000", 00:29:31.449 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:29:31.449 "listen_address": { 00:29:31.449 "trtype": "TCP", 00:29:31.449 "adrfam": "IPv4", 00:29:31.449 "traddr": "10.0.0.2", 00:29:31.449 "trsvcid": "4420" 00:29:31.449 }, 00:29:31.449 "peer_address": { 00:29:31.449 "trtype": "TCP", 00:29:31.449 "adrfam": "IPv4", 00:29:31.449 "traddr": "10.0.0.1", 00:29:31.449 "trsvcid": "48422" 00:29:31.449 }, 00:29:31.449 "auth": { 00:29:31.449 "state": "completed", 00:29:31.449 "digest": "sha384", 00:29:31.449 "dhgroup": "ffdhe4096" 00:29:31.449 } 00:29:31.449 } 00:29:31.449 ]' 00:29:31.449 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:31.449 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:31.449 22:28:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:31.710 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:29:31.710 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:31.710 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:31.710 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:31.710 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:31.710 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2E5N2Q0MjVmNDNjOGYxZGQ3YzQ5ZmZhYzIyZjk0MzU0MGViZTAyYzEzYzkxMTZhU0B5KA==: --dhchap-ctrl-secret DHHC-1:01:NTRiOGYyNWE3ZmFiZjM0YTY3Y2M5MmM5MTAxOWM1ZjDN3lJm: 00:29:31.710 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:02:Y2E5N2Q0MjVmNDNjOGYxZGQ3YzQ5ZmZhYzIyZjk0MzU0MGViZTAyYzEzYzkxMTZhU0B5KA==: --dhchap-ctrl-secret DHHC-1:01:NTRiOGYyNWE3ZmFiZjM0YTY3Y2M5MmM5MTAxOWM1ZjDN3lJm: 00:29:32.650 22:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:32.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:32.650 22:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:29:32.650 22:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.650 22:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:32.650 22:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.650 22:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:32.650 22:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:32.650 22:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:32.650 22:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:29:32.650 22:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:32.650 22:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:29:32.650 22:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:29:32.650 22:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:29:32.650 22:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:32.650 22:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:29:32.650 22:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.650 22:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:32.650 22:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.650 22:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:29:32.650 22:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:29:32.650 22:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:29:32.910 00:29:33.171 22:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:33.171 22:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:33.171 22:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:33.171 22:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:33.171 22:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:33.171 22:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.171 22:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:33.171 22:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.171 22:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:33.171 { 00:29:33.171 "cntlid": 79, 00:29:33.171 "qid": 0, 00:29:33.171 "state": "enabled", 00:29:33.171 "thread": "nvmf_tgt_poll_group_000", 00:29:33.171 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:29:33.171 "listen_address": { 00:29:33.171 "trtype": "TCP", 00:29:33.171 "adrfam": "IPv4", 00:29:33.171 "traddr": "10.0.0.2", 00:29:33.171 "trsvcid": "4420" 00:29:33.171 }, 00:29:33.171 "peer_address": { 00:29:33.171 "trtype": "TCP", 00:29:33.171 "adrfam": "IPv4", 00:29:33.171 "traddr": "10.0.0.1", 00:29:33.171 "trsvcid": "48448" 00:29:33.171 }, 00:29:33.171 "auth": { 00:29:33.171 "state": "completed", 00:29:33.171 "digest": "sha384", 00:29:33.171 "dhgroup": "ffdhe4096" 00:29:33.171 } 00:29:33.171 } 00:29:33.171 ]' 00:29:33.171 22:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:33.171 22:28:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:33.171 22:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:33.431 22:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:29:33.431 22:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:33.431 22:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:33.431 22:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:33.431 22:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:33.691 22:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmRjZjBlNWY4Y2FjY2E1ODM3ZmZkZjYxNDBjZjJjMDg5MzE4MGViYTljYjYwOWQ1YmU4OGM4NGNiMTE3OWI5NASd1M0=: 00:29:33.691 22:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:03:ZmRjZjBlNWY4Y2FjY2E1ODM3ZmZkZjYxNDBjZjJjMDg5MzE4MGViYTljYjYwOWQ1YmU4OGM4NGNiMTE3OWI5NASd1M0=: 00:29:34.261 22:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:34.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:34.261 22:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:29:34.261 22:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.261 22:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:34.261 22:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.261 22:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:29:34.261 22:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:34.261 22:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:34.261 22:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:34.522 22:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:29:34.522 22:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:34.522 22:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:29:34.522 22:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:29:34.522 22:28:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:29:34.522 22:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:34.522 22:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:34.522 22:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.522 22:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:34.522 22:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.522 22:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:34.522 22:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:34.523 22:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:34.783 00:29:34.783 22:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:34.783 22:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:34.783 22:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:35.043 22:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:35.043 22:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:35.043 22:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.043 22:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:35.043 22:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.043 22:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:35.043 { 00:29:35.043 "cntlid": 81, 00:29:35.043 "qid": 0, 00:29:35.043 "state": "enabled", 00:29:35.043 "thread": "nvmf_tgt_poll_group_000", 00:29:35.043 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:29:35.043 "listen_address": { 00:29:35.043 "trtype": "TCP", 00:29:35.043 "adrfam": "IPv4", 00:29:35.043 "traddr": "10.0.0.2", 00:29:35.043 "trsvcid": "4420" 00:29:35.043 }, 00:29:35.043 "peer_address": { 00:29:35.043 "trtype": "TCP", 00:29:35.043 "adrfam": "IPv4", 00:29:35.043 "traddr": "10.0.0.1", 00:29:35.043 "trsvcid": "48484" 00:29:35.043 }, 00:29:35.043 "auth": { 00:29:35.043 "state": "completed", 00:29:35.043 "digest": 
"sha384", 00:29:35.043 "dhgroup": "ffdhe6144" 00:29:35.043 } 00:29:35.043 } 00:29:35.043 ]' 00:29:35.043 22:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:35.043 22:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:35.043 22:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:35.043 22:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:29:35.043 22:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:35.304 22:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:35.304 22:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:35.304 22:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:35.304 22:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjNjM2IxZjk3OTkwYjNlNGExMjc5ZDc1MTFiZGNlNjVlODUxZWU0YmVjNDJlMWQ5Ps097Q==: --dhchap-ctrl-secret DHHC-1:03:NzBkNjkxNDA0ZDc5Mzg4OTI1MGMwMTA5YmZlNmIwNGM2ZTc0ZjgyM2VmMWVkMDk0YTUxOGZiNWIwNWI4YThkN3OXbH8=: 00:29:35.304 22:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:00:YjNjM2IxZjk3OTkwYjNlNGExMjc5ZDc1MTFiZGNlNjVlODUxZWU0YmVjNDJlMWQ5Ps097Q==: --dhchap-ctrl-secret DHHC-1:03:NzBkNjkxNDA0ZDc5Mzg4OTI1MGMwMTA5YmZlNmIwNGM2ZTc0ZjgyM2VmMWVkMDk0YTUxOGZiNWIwNWI4YThkN3OXbH8=: 00:29:36.246 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:36.246 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:36.246 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:29:36.246 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.246 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:36.246 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.246 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:36.246 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:36.246 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:36.246 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:29:36.246 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:36.246 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:29:36.246 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:29:36.246 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:29:36.246 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:36.246 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:36.246 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.246 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:36.246 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.246 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:36.246 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:36.246 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:36.818 00:29:36.818 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:36.818 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:36.818 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:36.818 22:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:36.818 22:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:36.818 22:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.818 22:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:36.818 22:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.818 22:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:36.818 { 00:29:36.818 "cntlid": 83, 00:29:36.818 "qid": 0, 00:29:36.818 "state": "enabled", 00:29:36.818 "thread": "nvmf_tgt_poll_group_000", 00:29:36.818 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:29:36.818 "listen_address": { 00:29:36.818 "trtype": "TCP", 00:29:36.818 "adrfam": "IPv4", 00:29:36.818 "traddr": "10.0.0.2", 00:29:36.818 
"trsvcid": "4420" 00:29:36.818 }, 00:29:36.818 "peer_address": { 00:29:36.818 "trtype": "TCP", 00:29:36.818 "adrfam": "IPv4", 00:29:36.818 "traddr": "10.0.0.1", 00:29:36.818 "trsvcid": "39116" 00:29:36.818 }, 00:29:36.818 "auth": { 00:29:36.818 "state": "completed", 00:29:36.818 "digest": "sha384", 00:29:36.818 "dhgroup": "ffdhe6144" 00:29:36.818 } 00:29:36.818 } 00:29:36.818 ]' 00:29:36.818 22:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:37.080 22:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:37.080 22:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:37.080 22:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:29:37.080 22:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:37.080 22:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:37.080 22:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:37.080 22:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:37.340 22:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTlmY2Y1MzA1OTQzNDU3YTRjMWE0MzVlNTU1ZGE2YjhHWWQq: --dhchap-ctrl-secret DHHC-1:02:OTNlZTNhZmFiNmUzMTdhNTdmYTY4ZGUyNmMxYTE5NWI3ZmQyNTM3Y2Y5OTNkNmI1qvQ86g==: 00:29:37.340 22:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:01:MTlmY2Y1MzA1OTQzNDU3YTRjMWE0MzVlNTU1ZGE2YjhHWWQq: --dhchap-ctrl-secret DHHC-1:02:OTNlZTNhZmFiNmUzMTdhNTdmYTY4ZGUyNmMxYTE5NWI3ZmQyNTM3Y2Y5OTNkNmI1qvQ86g==: 00:29:37.911 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:37.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:37.911 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:29:37.911 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.911 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:37.911 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.911 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:37.911 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:37.911 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:38.219 
22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:29:38.219 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:38.219 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:29:38.219 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:29:38.219 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:29:38.219 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:38.219 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:38.219 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.219 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:38.219 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.219 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:38.219 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:38.219 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:38.541 00:29:38.541 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:38.541 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:38.541 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:38.822 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:38.822 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:38.822 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.822 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:38.822 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.822 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:38.822 { 00:29:38.822 "cntlid": 85, 00:29:38.822 "qid": 0, 00:29:38.822 "state": "enabled", 00:29:38.822 "thread": "nvmf_tgt_poll_group_000", 00:29:38.822 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:29:38.822 "listen_address": { 00:29:38.822 "trtype": "TCP", 00:29:38.822 "adrfam": "IPv4", 00:29:38.822 "traddr": "10.0.0.2", 00:29:38.822 "trsvcid": "4420" 00:29:38.822 }, 00:29:38.822 "peer_address": { 00:29:38.822 "trtype": "TCP", 00:29:38.822 "adrfam": "IPv4", 00:29:38.822 "traddr": "10.0.0.1", 00:29:38.822 "trsvcid": "39140" 00:29:38.822 }, 00:29:38.822 "auth": { 00:29:38.822 "state": "completed", 00:29:38.822 "digest": "sha384", 00:29:38.822 "dhgroup": "ffdhe6144" 00:29:38.822 } 00:29:38.822 } 00:29:38.822 ]' 00:29:38.822 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:38.822 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:38.822 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:38.822 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:29:38.822 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:38.822 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:38.822 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:38.822 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:39.082 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2E5N2Q0MjVmNDNjOGYxZGQ3YzQ5ZmZhYzIyZjk0MzU0MGViZTAyYzEzYzkxMTZhU0B5KA==: --dhchap-ctrl-secret DHHC-1:01:NTRiOGYyNWE3ZmFiZjM0YTY3Y2M5MmM5MTAxOWM1ZjDN3lJm: 00:29:39.082 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:02:Y2E5N2Q0MjVmNDNjOGYxZGQ3YzQ5ZmZhYzIyZjk0MzU0MGViZTAyYzEzYzkxMTZhU0B5KA==: --dhchap-ctrl-secret DHHC-1:01:NTRiOGYyNWE3ZmFiZjM0YTY3Y2M5MmM5MTAxOWM1ZjDN3lJm: 00:29:40.023 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:40.023 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:40.023 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:29:40.023 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.023 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:40.023 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.023 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:40.023 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:40.023 22:28:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:40.023 22:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:29:40.023 22:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:40.023 22:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:29:40.023 22:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:29:40.023 22:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:29:40.023 22:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:40.023 22:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:29:40.023 22:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.023 22:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:40.023 22:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.023 22:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:29:40.023 22:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:29:40.023 22:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:29:40.284 00:29:40.548 22:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:40.548 22:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:40.548 22:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:40.548 22:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:40.548 22:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:40.548 22:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.548 22:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:40.548 22:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.548 22:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:40.548 { 00:29:40.548 "cntlid": 87, 
00:29:40.548 "qid": 0, 00:29:40.548 "state": "enabled", 00:29:40.548 "thread": "nvmf_tgt_poll_group_000", 00:29:40.548 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:29:40.548 "listen_address": { 00:29:40.548 "trtype": "TCP", 00:29:40.548 "adrfam": "IPv4", 00:29:40.548 "traddr": "10.0.0.2", 00:29:40.548 "trsvcid": "4420" 00:29:40.548 }, 00:29:40.548 "peer_address": { 00:29:40.548 "trtype": "TCP", 00:29:40.548 "adrfam": "IPv4", 00:29:40.548 "traddr": "10.0.0.1", 00:29:40.548 "trsvcid": "39162" 00:29:40.548 }, 00:29:40.548 "auth": { 00:29:40.548 "state": "completed", 00:29:40.548 "digest": "sha384", 00:29:40.548 "dhgroup": "ffdhe6144" 00:29:40.548 } 00:29:40.548 } 00:29:40.548 ]' 00:29:40.548 22:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:40.548 22:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:40.548 22:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:40.808 22:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:29:40.808 22:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:40.808 22:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:40.808 22:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:40.808 22:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:40.808 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmRjZjBlNWY4Y2FjY2E1ODM3ZmZkZjYxNDBjZjJjMDg5MzE4MGViYTljYjYwOWQ1YmU4OGM4NGNiMTE3OWI5NASd1M0=: 00:29:40.808 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:03:ZmRjZjBlNWY4Y2FjY2E1ODM3ZmZkZjYxNDBjZjJjMDg5MzE4MGViYTljYjYwOWQ1YmU4OGM4NGNiMTE3OWI5NASd1M0=: 00:29:41.749 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:41.749 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:41.749 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:29:41.749 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.749 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:41.749 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.749 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:29:41.749 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:41.749 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:41.749 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:41.749 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:29:41.749 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:41.749 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:29:41.749 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:29:41.749 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:29:41.749 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:41.749 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:41.749 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.749 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:41.749 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.749 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:41.749 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:41.749 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:42.319 00:29:42.319 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:42.319 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:42.319 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:42.579 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:42.579 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:42.579 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.579 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:42.579 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.579 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:42.579 { 00:29:42.579 "cntlid": 89, 00:29:42.579 "qid": 0, 00:29:42.579 "state": "enabled", 00:29:42.579 "thread": "nvmf_tgt_poll_group_000", 00:29:42.579 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:29:42.579 "listen_address": { 00:29:42.579 "trtype": "TCP", 00:29:42.579 "adrfam": "IPv4", 00:29:42.579 "traddr": "10.0.0.2", 00:29:42.579 "trsvcid": "4420" 00:29:42.579 }, 00:29:42.579 "peer_address": { 00:29:42.579 "trtype": "TCP", 00:29:42.579 "adrfam": "IPv4", 00:29:42.579 "traddr": "10.0.0.1", 00:29:42.579 "trsvcid": "39188" 00:29:42.579 }, 00:29:42.579 "auth": { 00:29:42.579 "state": "completed", 00:29:42.579 "digest": "sha384", 00:29:42.579 "dhgroup": "ffdhe8192" 00:29:42.579 } 00:29:42.579 } 00:29:42.579 ]' 00:29:42.579 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:42.579 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:42.579 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:42.580 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:29:42.580 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:42.840 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:42.840 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:42.840 22:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:42.840 22:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjNjM2IxZjk3OTkwYjNlNGExMjc5ZDc1MTFiZGNlNjVlODUxZWU0YmVjNDJlMWQ5Ps097Q==: --dhchap-ctrl-secret DHHC-1:03:NzBkNjkxNDA0ZDc5Mzg4OTI1MGMwMTA5YmZlNmIwNGM2ZTc0ZjgyM2VmMWVkMDk0YTUxOGZiNWIwNWI4YThkN3OXbH8=: 00:29:42.841 22:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:00:YjNjM2IxZjk3OTkwYjNlNGExMjc5ZDc1MTFiZGNlNjVlODUxZWU0YmVjNDJlMWQ5Ps097Q==: --dhchap-ctrl-secret DHHC-1:03:NzBkNjkxNDA0ZDc5Mzg4OTI1MGMwMTA5YmZlNmIwNGM2ZTc0ZjgyM2VmMWVkMDk0YTUxOGZiNWIwNWI4YThkN3OXbH8=: 00:29:43.782 22:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:43.782 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:43.782 22:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:29:43.782 22:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.782 22:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:43.782 22:28:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.782 22:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:43.782 22:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:43.782 22:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:43.782 22:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:29:43.782 22:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:43.782 22:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:29:43.782 22:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:29:43.782 22:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:29:43.782 22:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:43.782 22:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:43.782 22:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.782 22:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:43.782 22:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.782 22:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:43.782 22:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:43.782 22:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:44.352 00:29:44.352 22:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:44.352 22:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:44.352 22:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:44.612 22:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:44.612 22:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:29:44.612 22:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.612 22:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:44.612 22:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.612 22:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:44.612 { 00:29:44.612 "cntlid": 91, 00:29:44.612 "qid": 0, 00:29:44.612 "state": "enabled", 00:29:44.612 "thread": "nvmf_tgt_poll_group_000", 00:29:44.612 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:29:44.612 "listen_address": { 00:29:44.612 "trtype": "TCP", 00:29:44.612 "adrfam": "IPv4", 00:29:44.612 "traddr": "10.0.0.2", 00:29:44.612 "trsvcid": "4420" 00:29:44.612 }, 00:29:44.612 "peer_address": { 00:29:44.612 "trtype": "TCP", 00:29:44.612 "adrfam": "IPv4", 00:29:44.612 "traddr": "10.0.0.1", 00:29:44.612 "trsvcid": "39210" 00:29:44.612 }, 00:29:44.612 "auth": { 00:29:44.612 "state": "completed", 00:29:44.612 "digest": "sha384", 00:29:44.612 "dhgroup": "ffdhe8192" 00:29:44.612 } 00:29:44.612 } 00:29:44.612 ]' 00:29:44.612 22:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:44.612 22:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:44.612 22:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:44.612 22:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:29:44.612 22:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:44.612 22:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:44.612 22:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:44.612 22:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:44.873 22:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTlmY2Y1MzA1OTQzNDU3YTRjMWE0MzVlNTU1ZGE2YjhHWWQq: --dhchap-ctrl-secret DHHC-1:02:OTNlZTNhZmFiNmUzMTdhNTdmYTY4ZGUyNmMxYTE5NWI3ZmQyNTM3Y2Y5OTNkNmI1qvQ86g==: 00:29:44.873 22:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:01:MTlmY2Y1MzA1OTQzNDU3YTRjMWE0MzVlNTU1ZGE2YjhHWWQq: --dhchap-ctrl-secret DHHC-1:02:OTNlZTNhZmFiNmUzMTdhNTdmYTY4ZGUyNmMxYTE5NWI3ZmQyNTM3Y2Y5OTNkNmI1qvQ86g==: 00:29:45.825 22:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:45.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:45.825 22:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:29:45.825 22:28:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.825 22:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:45.825 22:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.825 22:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:45.825 22:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:45.825 22:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:45.825 22:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:29:45.825 22:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:45.825 22:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:29:45.825 22:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:29:45.825 22:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:29:45.825 22:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:45.825 22:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:45.825 22:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.825 22:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:45.825 22:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.825 22:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:45.825 22:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:45.825 22:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:46.393 00:29:46.393 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:46.393 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:46.393 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:46.653 22:28:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:46.653 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:46.653 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.653 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:46.653 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.653 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:46.653 { 00:29:46.653 "cntlid": 93, 00:29:46.653 "qid": 0, 00:29:46.653 "state": "enabled", 00:29:46.653 "thread": "nvmf_tgt_poll_group_000", 00:29:46.653 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:29:46.653 "listen_address": { 00:29:46.653 "trtype": "TCP", 00:29:46.653 "adrfam": "IPv4", 00:29:46.653 "traddr": "10.0.0.2", 00:29:46.653 "trsvcid": "4420" 00:29:46.653 }, 00:29:46.653 "peer_address": { 00:29:46.653 "trtype": "TCP", 00:29:46.653 "adrfam": "IPv4", 00:29:46.653 "traddr": "10.0.0.1", 00:29:46.653 "trsvcid": "39240" 00:29:46.653 }, 00:29:46.653 "auth": { 00:29:46.653 "state": "completed", 00:29:46.653 "digest": "sha384", 00:29:46.653 "dhgroup": "ffdhe8192" 00:29:46.653 } 00:29:46.653 } 00:29:46.653 ]' 00:29:46.653 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:46.653 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:46.653 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:46.653 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:29:46.653 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:46.653 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:46.653 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:46.653 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:46.913 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2E5N2Q0MjVmNDNjOGYxZGQ3YzQ5ZmZhYzIyZjk0MzU0MGViZTAyYzEzYzkxMTZhU0B5KA==: --dhchap-ctrl-secret DHHC-1:01:NTRiOGYyNWE3ZmFiZjM0YTY3Y2M5MmM5MTAxOWM1ZjDN3lJm: 00:29:46.913 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:02:Y2E5N2Q0MjVmNDNjOGYxZGQ3YzQ5ZmZhYzIyZjk0MzU0MGViZTAyYzEzYzkxMTZhU0B5KA==: --dhchap-ctrl-secret DHHC-1:01:NTRiOGYyNWE3ZmFiZjM0YTY3Y2M5MmM5MTAxOWM1ZjDN3lJm: 00:29:47.483 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:47.746 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:47.746 22:28:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:29:47.746 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.746 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:47.746 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.746 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:47.746 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:47.746 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:47.746 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:29:47.746 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:47.746 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:29:47.746 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:29:47.746 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:29:47.746 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:47.746 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:29:47.746 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.746 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:47.746 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.746 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:29:47.746 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:29:47.746 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:29:48.315 00:29:48.315 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:48.315 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:48.315 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:48.576 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:48.576 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:48.576 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.576 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:48.576 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.576 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:48.576 { 00:29:48.576 "cntlid": 95, 00:29:48.576 "qid": 0, 00:29:48.576 "state": "enabled", 00:29:48.576 "thread": "nvmf_tgt_poll_group_000", 00:29:48.576 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:29:48.576 "listen_address": { 00:29:48.576 "trtype": "TCP", 00:29:48.576 "adrfam": "IPv4", 00:29:48.576 "traddr": "10.0.0.2", 00:29:48.576 "trsvcid": "4420" 00:29:48.576 }, 00:29:48.576 "peer_address": { 00:29:48.576 "trtype": "TCP", 00:29:48.576 "adrfam": "IPv4", 00:29:48.576 "traddr": "10.0.0.1", 00:29:48.576 "trsvcid": "41326" 00:29:48.576 }, 00:29:48.576 "auth": { 00:29:48.576 "state": "completed", 00:29:48.576 "digest": "sha384", 00:29:48.576 "dhgroup": "ffdhe8192" 00:29:48.576 } 00:29:48.576 } 00:29:48.576 ]' 00:29:48.576 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:48.576 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:48.576 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:48.576 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:29:48.576 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:48.576 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:48.576 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:48.576 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:48.836 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmRjZjBlNWY4Y2FjY2E1ODM3ZmZkZjYxNDBjZjJjMDg5MzE4MGViYTljYjYwOWQ1YmU4OGM4NGNiMTE3OWI5NASd1M0=: 00:29:48.836 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:03:ZmRjZjBlNWY4Y2FjY2E1ODM3ZmZkZjYxNDBjZjJjMDg5MzE4MGViYTljYjYwOWQ1YmU4OGM4NGNiMTE3OWI5NASd1M0=: 00:29:49.777 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:49.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:49.777 22:28:44 
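Each iteration then finishes with a second authentication pass through the kernel initiator: nvme-cli connects with the secrets in their DHHC-1 wire form, disconnects, and the host is removed from the subsystem again. A sketch of that leg, with the flags and the secret string copied from the key3 pass above:

```bash
hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204

# Kernel-initiator pass: the DH-HMAC-CHAP secret travels on the command line
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 \
    --dhchap-secret 'DHHC-1:03:ZmRjZjBlNWY4Y2FjY2E1ODM3ZmZkZjYxNDBjZjJjMDg5MzE4MGViYTljYjYwOWQ1YmU4OGM4NGNiMTE3OWI5NASd1M0=:'

# Tear down and de-authorize the host before the next key id is exercised
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
```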
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:29:49.777 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.777 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:49.777 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.777 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:29:49.777 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:29:49.777 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:49.777 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:29:49.777 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:29:49.777 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:29:49.777 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:49.777 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:29:49.777 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:29:49.777 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:29:49.777 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:49.777 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:49.777 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.777 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:49.777 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.777 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:49.777 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:49.777 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:50.037 00:29:50.037 
22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:50.037 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:50.037 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:50.299 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:50.299 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:50.299 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.299 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:50.299 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.299 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:50.299 { 00:29:50.299 "cntlid": 97, 00:29:50.299 "qid": 0, 00:29:50.299 "state": "enabled", 00:29:50.299 "thread": "nvmf_tgt_poll_group_000", 00:29:50.299 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:29:50.299 "listen_address": { 00:29:50.299 "trtype": "TCP", 00:29:50.299 "adrfam": "IPv4", 00:29:50.299 "traddr": "10.0.0.2", 00:29:50.299 "trsvcid": "4420" 00:29:50.299 }, 00:29:50.299 "peer_address": { 00:29:50.299 "trtype": "TCP", 00:29:50.299 "adrfam": "IPv4", 00:29:50.299 "traddr": "10.0.0.1", 00:29:50.299 "trsvcid": "41352" 00:29:50.299 }, 00:29:50.299 "auth": { 00:29:50.299 "state": "completed", 00:29:50.299 "digest": "sha512", 00:29:50.299 "dhgroup": "null" 00:29:50.299 } 00:29:50.299 } 00:29:50.299 ]' 00:29:50.299 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:50.299 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:50.299 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:50.299 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:29:50.299 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:50.299 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:50.299 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:50.299 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:50.559 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjNjM2IxZjk3OTkwYjNlNGExMjc5ZDc1MTFiZGNlNjVlODUxZWU0YmVjNDJlMWQ5Ps097Q==: --dhchap-ctrl-secret DHHC-1:03:NzBkNjkxNDA0ZDc5Mzg4OTI1MGMwMTA5YmZlNmIwNGM2ZTc0ZjgyM2VmMWVkMDk0YTUxOGZiNWIwNWI4YThkN3OXbH8=: 00:29:50.559 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 
80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:00:YjNjM2IxZjk3OTkwYjNlNGExMjc5ZDc1MTFiZGNlNjVlODUxZWU0YmVjNDJlMWQ5Ps097Q==: --dhchap-ctrl-secret DHHC-1:03:NzBkNjkxNDA0ZDc5Mzg4OTI1MGMwMTA5YmZlNmIwNGM2ZTc0ZjgyM2VmMWVkMDk0YTUxOGZiNWIwNWI4YThkN3OXbH8=: 00:29:51.500 22:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:51.500 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:51.500 22:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:29:51.500 22:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.500 22:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:51.500 22:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.500 22:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:51.500 22:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:29:51.500 22:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:29:51.500 22:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:29:51.500 22:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:51.500 22:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:29:51.500 22:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:29:51.500 22:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:29:51.500 22:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:51.500 22:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:51.500 22:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.500 22:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:51.500 22:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.500 22:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:51.500 22:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:51.500 22:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:51.760 00:29:51.760 22:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:51.760 22:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:51.760 22:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:52.020 22:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:52.020 22:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:52.020 22:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.020 22:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:52.020 22:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.020 22:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:52.020 { 00:29:52.020 "cntlid": 99, 00:29:52.020 "qid": 0, 00:29:52.020 "state": "enabled", 00:29:52.020 "thread": "nvmf_tgt_poll_group_000", 00:29:52.020 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:29:52.020 "listen_address": { 00:29:52.020 "trtype": "TCP", 00:29:52.020 "adrfam": "IPv4", 00:29:52.020 "traddr": "10.0.0.2", 00:29:52.020 "trsvcid": "4420" 00:29:52.020 }, 00:29:52.020 "peer_address": { 00:29:52.020 "trtype": "TCP", 00:29:52.020 "adrfam": "IPv4", 00:29:52.020 "traddr": "10.0.0.1", 00:29:52.020 "trsvcid": "41374" 00:29:52.020 }, 00:29:52.020 "auth": { 00:29:52.020 "state": "completed", 00:29:52.020 "digest": "sha512", 00:29:52.020 "dhgroup": "null" 00:29:52.020 } 00:29:52.020 } 00:29:52.020 ]' 00:29:52.020 22:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:52.020 22:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:52.020 22:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:52.020 22:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:29:52.020 22:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:52.020 22:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:52.020 22:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:52.020 22:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:52.280 22:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTlmY2Y1MzA1OTQzNDU3YTRjMWE0MzVlNTU1ZGE2YjhHWWQq: --dhchap-ctrl-secret DHHC-1:02:OTNlZTNhZmFiNmUzMTdhNTdmYTY4ZGUyNmMxYTE5NWI3ZmQyNTM3Y2Y5OTNkNmI1qvQ86g==: 00:29:52.280 22:28:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:01:MTlmY2Y1MzA1OTQzNDU3YTRjMWE0MzVlNTU1ZGE2YjhHWWQq: --dhchap-ctrl-secret DHHC-1:02:OTNlZTNhZmFiNmUzMTdhNTdmYTY4ZGUyNmMxYTE5NWI3ZmQyNTM3Y2Y5OTNkNmI1qvQ86g==: 00:29:53.223 22:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:53.223 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:53.223 22:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:29:53.223 22:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.223 22:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:53.223 22:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.223 22:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:53.223 22:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:29:53.223 22:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:29:53.223 22:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:29:53.223 22:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:53.223 22:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:29:53.223 22:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:29:53.223 22:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:29:53.223 22:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:53.223 22:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:53.223 22:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.223 22:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:53.223 22:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.223 22:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:53.223 22:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
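By this point the loop has moved from sha384/ffdhe8192 to the null dhgroup under sha512. With --dhchap-dhgroups null no FFDHE exchange augments the challenge: the response is derived from the shared DH-HMAC-CHAP secret alone, which is why the qpairs dumps in this stretch report "dhgroup": "null". The host-side pinning for these passes reduces to:

```bash
# Pin the host to plain (non-DH-augmented) DH-HMAC-CHAP under SHA-512;
# the qpair's auth block should then read digest=sha512, dhgroup=null
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups null
```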
00:29:53.223 22:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:53.483 00:29:53.483 22:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:53.483 22:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:53.483 22:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:53.483 22:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:53.483 22:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:53.483 22:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.483 22:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:53.743 22:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.743 22:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:53.743 { 00:29:53.743 "cntlid": 101, 00:29:53.743 "qid": 0, 00:29:53.743 "state": "enabled", 00:29:53.743 "thread": "nvmf_tgt_poll_group_000", 00:29:53.744 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:29:53.744 "listen_address": { 00:29:53.744 "trtype": "TCP", 00:29:53.744 "adrfam": "IPv4", 00:29:53.744 "traddr": "10.0.0.2", 00:29:53.744 "trsvcid": "4420" 00:29:53.744 }, 00:29:53.744 "peer_address": { 00:29:53.744 "trtype": "TCP", 00:29:53.744 "adrfam": "IPv4", 00:29:53.744 "traddr": "10.0.0.1", 00:29:53.744 "trsvcid": "41392" 00:29:53.744 }, 00:29:53.744 "auth": { 00:29:53.744 "state": "completed", 00:29:53.744 "digest": "sha512", 00:29:53.744 "dhgroup": "null" 00:29:53.744 } 00:29:53.744 } 00:29:53.744 ]' 00:29:53.744 22:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:53.744 22:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:53.744 22:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:53.744 22:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:29:53.744 22:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:53.744 22:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:53.744 22:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:53.744 22:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:54.004 22:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:Y2E5N2Q0MjVmNDNjOGYxZGQ3YzQ5ZmZhYzIyZjk0MzU0MGViZTAyYzEzYzkxMTZhU0B5KA==: --dhchap-ctrl-secret DHHC-1:01:NTRiOGYyNWE3ZmFiZjM0YTY3Y2M5MmM5MTAxOWM1ZjDN3lJm: 00:29:54.004 22:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:02:Y2E5N2Q0MjVmNDNjOGYxZGQ3YzQ5ZmZhYzIyZjk0MzU0MGViZTAyYzEzYzkxMTZhU0B5KA==: --dhchap-ctrl-secret DHHC-1:01:NTRiOGYyNWE3ZmFiZjM0YTY3Y2M5MmM5MTAxOWM1ZjDN3lJm: 00:29:54.574 22:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:54.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:54.835 22:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:29:54.835 22:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.835 22:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:54.835 22:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.835 22:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:54.835 22:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:29:54.835 22:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:29:54.835 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:29:54.835 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:54.835 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:29:54.835 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:29:54.835 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:29:54.835 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:54.835 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:29:54.835 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.835 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:54.835 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.835 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:29:54.835 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:29:54.835 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:29:55.095 00:29:55.095 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:55.095 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:55.095 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:55.357 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:55.357 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:55.357 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.357 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:55.357 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.357 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:55.357 { 00:29:55.357 "cntlid": 103, 00:29:55.357 "qid": 0, 00:29:55.357 "state": "enabled", 00:29:55.357 "thread": "nvmf_tgt_poll_group_000", 00:29:55.357 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:29:55.357 "listen_address": { 00:29:55.357 "trtype": "TCP", 00:29:55.357 "adrfam": "IPv4", 00:29:55.357 "traddr": "10.0.0.2", 00:29:55.357 "trsvcid": "4420" 00:29:55.357 }, 00:29:55.357 "peer_address": { 00:29:55.357 "trtype": "TCP", 00:29:55.357 "adrfam": "IPv4", 00:29:55.357 "traddr": "10.0.0.1", 00:29:55.357 "trsvcid": "41414" 00:29:55.357 }, 00:29:55.357 "auth": { 00:29:55.357 "state": "completed", 00:29:55.357 "digest": "sha512", 00:29:55.357 "dhgroup": "null" 00:29:55.357 } 00:29:55.357 } 00:29:55.357 ]' 00:29:55.357 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:55.357 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:55.357 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:55.357 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:29:55.357 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:55.357 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:55.357 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:55.357 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:55.619 22:28:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmRjZjBlNWY4Y2FjY2E1ODM3ZmZkZjYxNDBjZjJjMDg5MzE4MGViYTljYjYwOWQ1YmU4OGM4NGNiMTE3OWI5NASd1M0=: 00:29:55.619 22:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:03:ZmRjZjBlNWY4Y2FjY2E1ODM3ZmZkZjYxNDBjZjJjMDg5MzE4MGViYTljYjYwOWQ1YmU4OGM4NGNiMTE3OWI5NASd1M0=: 00:29:56.560 22:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:56.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:56.560 22:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:29:56.560 22:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.560 22:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:56.560 22:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.560 22:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:29:56.560 22:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:56.560 22:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:56.560 22:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:56.560 22:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:29:56.560 22:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:56.560 22:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:29:56.560 22:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:29:56.560 22:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:29:56.560 22:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:56.560 22:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:56.560 22:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.560 22:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:56.560 22:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.560 22:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
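The auth.sh@118-123 markers scattered through the trace give away the shape of the driver: three nested loops over digests, dhgroups, and key ids, re-pinning the host options before every connect_authenticate call. A reconstruction of that structure (the array contents and helper names are inferred from the trace, not quoted from the script; the trace shows sha384 then sha512, and dhgroups null, ffdhe2048, ffdhe3072 in order):

```bash
# Reconstructed driver loop implied by the target/auth.sh trace markers;
# the full digest and dhgroup lists below are an inference.
digests=(sha256 sha384 sha512)
dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
for digest in "${digests[@]}"; do              # target/auth.sh@118
    for dhgroup in "${dhgroups[@]}"; do        # target/auth.sh@119
        for keyid in "${!keys[@]}"; do         # target/auth.sh@120
            hostrpc bdev_nvme_set_options \
                --dhchap-digests "$digest" \
                --dhchap-dhgroups "$dhgroup"   # target/auth.sh@121
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # @123
        done
    done
done
```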
00:29:56.560 22:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:56.560 22:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:56.821 00:29:56.821 22:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:56.821 22:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:56.821 22:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:57.082 22:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:57.082 22:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:57.082 22:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.082 22:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:57.082 22:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:57.082 22:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:57.082 { 00:29:57.082 "cntlid": 105, 00:29:57.082 "qid": 0, 00:29:57.082 "state": "enabled", 00:29:57.082 "thread": "nvmf_tgt_poll_group_000", 00:29:57.082 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:29:57.082 "listen_address": { 00:29:57.082 "trtype": "TCP", 00:29:57.082 "adrfam": "IPv4", 00:29:57.082 "traddr": "10.0.0.2", 00:29:57.082 "trsvcid": "4420" 00:29:57.082 }, 00:29:57.082 "peer_address": { 00:29:57.082 "trtype": "TCP", 00:29:57.082 "adrfam": "IPv4", 00:29:57.082 "traddr": "10.0.0.1", 00:29:57.082 "trsvcid": "35388" 00:29:57.082 }, 00:29:57.082 "auth": { 00:29:57.082 "state": "completed", 00:29:57.082 "digest": "sha512", 00:29:57.082 "dhgroup": "ffdhe2048" 00:29:57.082 } 00:29:57.082 } 00:29:57.082 ]' 00:29:57.082 22:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:57.082 22:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:57.082 22:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:57.082 22:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:29:57.082 22:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:57.082 22:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:57.082 22:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:57.082 22:28:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:57.344 22:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjNjM2IxZjk3OTkwYjNlNGExMjc5ZDc1MTFiZGNlNjVlODUxZWU0YmVjNDJlMWQ5Ps097Q==: --dhchap-ctrl-secret DHHC-1:03:NzBkNjkxNDA0ZDc5Mzg4OTI1MGMwMTA5YmZlNmIwNGM2ZTc0ZjgyM2VmMWVkMDk0YTUxOGZiNWIwNWI4YThkN3OXbH8=: 00:29:57.344 22:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:00:YjNjM2IxZjk3OTkwYjNlNGExMjc5ZDc1MTFiZGNlNjVlODUxZWU0YmVjNDJlMWQ5Ps097Q==: --dhchap-ctrl-secret DHHC-1:03:NzBkNjkxNDA0ZDc5Mzg4OTI1MGMwMTA5YmZlNmIwNGM2ZTc0ZjgyM2VmMWVkMDk0YTUxOGZiNWIwNWI4YThkN3OXbH8=: 00:29:58.286 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:58.286 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:58.286 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:29:58.286 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:58.286 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:58.286 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:58.286 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:58.286 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:58.286 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:58.286 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:29:58.286 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:58.286 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:29:58.286 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:29:58.286 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:29:58.286 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:58.286 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:58.286 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:58.286 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:29:58.286 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:58.286 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:58.286 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:58.286 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:58.546 00:29:58.546 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:58.546 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:58.546 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:58.807 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:58.807 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:58.807 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:58.807 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:58.807 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:58.807 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:58.807 { 00:29:58.807 "cntlid": 107, 00:29:58.807 "qid": 0, 00:29:58.807 "state": "enabled", 00:29:58.807 "thread": "nvmf_tgt_poll_group_000", 00:29:58.807 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:29:58.807 "listen_address": { 00:29:58.807 "trtype": "TCP", 00:29:58.807 "adrfam": "IPv4", 00:29:58.807 "traddr": "10.0.0.2", 00:29:58.807 "trsvcid": "4420" 00:29:58.807 }, 00:29:58.807 "peer_address": { 00:29:58.807 "trtype": "TCP", 00:29:58.807 "adrfam": "IPv4", 00:29:58.807 "traddr": "10.0.0.1", 00:29:58.807 "trsvcid": "35428" 00:29:58.807 }, 00:29:58.807 "auth": { 00:29:58.807 "state": "completed", 00:29:58.807 "digest": "sha512", 00:29:58.807 "dhgroup": "ffdhe2048" 00:29:58.807 } 00:29:58.807 } 00:29:58.807 ]' 00:29:58.807 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:58.807 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:58.807 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:58.807 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:29:58.807 22:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:29:58.807 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:58.807 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:58.807 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:59.067 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTlmY2Y1MzA1OTQzNDU3YTRjMWE0MzVlNTU1ZGE2YjhHWWQq: --dhchap-ctrl-secret DHHC-1:02:OTNlZTNhZmFiNmUzMTdhNTdmYTY4ZGUyNmMxYTE5NWI3ZmQyNTM3Y2Y5OTNkNmI1qvQ86g==: 00:29:59.067 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:01:MTlmY2Y1MzA1OTQzNDU3YTRjMWE0MzVlNTU1ZGE2YjhHWWQq: --dhchap-ctrl-secret DHHC-1:02:OTNlZTNhZmFiNmUzMTdhNTdmYTY4ZGUyNmMxYTE5NWI3ZmQyNTM3Y2Y5OTNkNmI1qvQ86g==: 00:30:00.006 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:00.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:00.006 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:30:00.006 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.006 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:00.006 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.006 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:00.006 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:00.006 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:00.006 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:30:00.006 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:00.006 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:30:00.006 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:30:00.006 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:30:00.007 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:00.007 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
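The secrets handed to nvme connect throughout this section use the NVMe DHHC-1 representation, DHHC-1:&lt;t&gt;:&lt;base64 payload&gt;:, where &lt;t&gt; records the hash used to transform the configured secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload carries the secret plus a CRC-32 check value. Compatible strings can be generated with nvme-cli; a sketch, assuming a reasonably recent nvme-cli that ships the gen-dhchap-key subcommand (verify the exact flags against `nvme help`):

```bash
# Generate a 48-byte DH-HMAC-CHAP secret transformed with SHA-384, so the
# result prints with a DHHC-1:02: prefix like the key2 strings in this log.
# Subcommand and flag names here are assumptions about nvme-cli, not taken
# from this trace.
nvme gen-dhchap-key --hmac=2 --key-length=48 \
    --nqn nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204
```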
00:30:00.007 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.007 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:00.007 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.007 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:00.007 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:00.007 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:00.267 00:30:00.267 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:00.267 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:00.267 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:00.528 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:00.528 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:00.528 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.528 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:00.528 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.528 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:00.528 { 00:30:00.528 "cntlid": 109, 00:30:00.528 "qid": 0, 00:30:00.528 "state": "enabled", 00:30:00.528 "thread": "nvmf_tgt_poll_group_000", 00:30:00.528 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:30:00.528 "listen_address": { 00:30:00.528 "trtype": "TCP", 00:30:00.528 "adrfam": "IPv4", 00:30:00.528 "traddr": "10.0.0.2", 00:30:00.528 "trsvcid": "4420" 00:30:00.528 }, 00:30:00.528 "peer_address": { 00:30:00.528 "trtype": "TCP", 00:30:00.528 "adrfam": "IPv4", 00:30:00.528 "traddr": "10.0.0.1", 00:30:00.528 "trsvcid": "35446" 00:30:00.528 }, 00:30:00.528 "auth": { 00:30:00.528 "state": "completed", 00:30:00.528 "digest": "sha512", 00:30:00.528 "dhgroup": "ffdhe2048" 00:30:00.528 } 00:30:00.528 } 00:30:00.528 ]' 00:30:00.528 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:00.528 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:00.528 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:00.528 22:28:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:30:00.528 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:00.528 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:00.528 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:00.528 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:00.789 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2E5N2Q0MjVmNDNjOGYxZGQ3YzQ5ZmZhYzIyZjk0MzU0MGViZTAyYzEzYzkxMTZhU0B5KA==: --dhchap-ctrl-secret DHHC-1:01:NTRiOGYyNWE3ZmFiZjM0YTY3Y2M5MmM5MTAxOWM1ZjDN3lJm: 00:30:00.789 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:02:Y2E5N2Q0MjVmNDNjOGYxZGQ3YzQ5ZmZhYzIyZjk0MzU0MGViZTAyYzEzYzkxMTZhU0B5KA==: --dhchap-ctrl-secret DHHC-1:01:NTRiOGYyNWE3ZmFiZjM0YTY3Y2M5MmM5MTAxOWM1ZjDN3lJm: 00:30:01.730 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:01.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:01.730 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:30:01.730 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.730 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:01.730 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.730 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:01.730 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:01.730 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:01.730 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:30:01.730 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:01.730 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:30:01.730 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:30:01.730 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:30:01.730 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:01.730 22:28:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:30:01.730 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.730 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:01.730 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.730 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:30:01.730 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:30:01.730 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:30:01.991 00:30:01.991 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:01.992 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:01.992 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:02.253 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:02.253 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:02.253 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.253 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:02.253 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.253 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:02.253 { 00:30:02.253 "cntlid": 111, 00:30:02.253 "qid": 0, 00:30:02.253 "state": "enabled", 00:30:02.253 "thread": "nvmf_tgt_poll_group_000", 00:30:02.253 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:30:02.253 "listen_address": { 00:30:02.253 "trtype": "TCP", 00:30:02.253 "adrfam": "IPv4", 00:30:02.253 "traddr": "10.0.0.2", 00:30:02.253 "trsvcid": "4420" 00:30:02.253 }, 00:30:02.253 "peer_address": { 00:30:02.253 "trtype": "TCP", 00:30:02.253 "adrfam": "IPv4", 00:30:02.253 "traddr": "10.0.0.1", 00:30:02.253 "trsvcid": "35474" 00:30:02.253 }, 00:30:02.253 "auth": { 00:30:02.253 "state": "completed", 00:30:02.253 "digest": "sha512", 00:30:02.253 "dhgroup": "ffdhe2048" 00:30:02.253 } 00:30:02.253 } 00:30:02.253 ]' 00:30:02.253 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:02.253 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:02.253 
22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:02.253 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:30:02.253 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:02.253 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:02.253 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:02.253 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:02.513 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmRjZjBlNWY4Y2FjY2E1ODM3ZmZkZjYxNDBjZjJjMDg5MzE4MGViYTljYjYwOWQ1YmU4OGM4NGNiMTE3OWI5NASd1M0=: 00:30:02.513 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:03:ZmRjZjBlNWY4Y2FjY2E1ODM3ZmZkZjYxNDBjZjJjMDg5MzE4MGViYTljYjYwOWQ1YmU4OGM4NGNiMTE3OWI5NASd1M0=: 00:30:03.453 22:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:03.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:03.453 22:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:30:03.453 22:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.453 22:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:03.453 22:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.453 22:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:30:03.453 22:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:03.453 22:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:03.453 22:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:03.453 22:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:30:03.453 22:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:03.453 22:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:30:03.453 22:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:30:03.453 22:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:30:03.453 22:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:03.453 22:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:03.453 22:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.453 22:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:03.453 22:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.453 22:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:03.453 22:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:03.453 22:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:03.713 00:30:03.713 22:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:03.713 22:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:03.713 22:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:03.973 22:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:03.973 22:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:03.973 22:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.973 22:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:03.973 22:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.973 22:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:03.973 { 00:30:03.973 "cntlid": 113, 00:30:03.973 "qid": 0, 00:30:03.973 "state": "enabled", 00:30:03.973 "thread": "nvmf_tgt_poll_group_000", 00:30:03.973 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:30:03.973 "listen_address": { 00:30:03.973 "trtype": "TCP", 00:30:03.973 "adrfam": "IPv4", 00:30:03.973 "traddr": "10.0.0.2", 00:30:03.973 "trsvcid": "4420" 00:30:03.973 }, 00:30:03.973 "peer_address": { 00:30:03.973 "trtype": "TCP", 00:30:03.973 "adrfam": "IPv4", 00:30:03.973 "traddr": "10.0.0.1", 00:30:03.973 "trsvcid": "35498" 00:30:03.973 }, 00:30:03.973 "auth": { 00:30:03.973 "state": "completed", 00:30:03.973 "digest": "sha512", 00:30:03.973 "dhgroup": "ffdhe3072" 00:30:03.973 } 00:30:03.973 } 00:30:03.973 ]' 00:30:03.973 22:28:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:03.973 22:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:03.973 22:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:03.973 22:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:30:03.973 22:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:03.973 22:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:03.973 22:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:03.973 22:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:04.234 22:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjNjM2IxZjk3OTkwYjNlNGExMjc5ZDc1MTFiZGNlNjVlODUxZWU0YmVjNDJlMWQ5Ps097Q==: --dhchap-ctrl-secret DHHC-1:03:NzBkNjkxNDA0ZDc5Mzg4OTI1MGMwMTA5YmZlNmIwNGM2ZTc0ZjgyM2VmMWVkMDk0YTUxOGZiNWIwNWI4YThkN3OXbH8=: 00:30:04.234 22:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:00:YjNjM2IxZjk3OTkwYjNlNGExMjc5ZDc1MTFiZGNlNjVlODUxZWU0YmVjNDJlMWQ5Ps097Q==: --dhchap-ctrl-secret DHHC-1:03:NzBkNjkxNDA0ZDc5Mzg4OTI1MGMwMTA5YmZlNmIwNGM2ZTc0ZjgyM2VmMWVkMDk0YTUxOGZiNWIwNWI4YThkN3OXbH8=: 00:30:05.175 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:05.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:05.175 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:30:05.175 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.175 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:05.175 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.175 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:05.175 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:05.175 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:05.175 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:30:05.175 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:05.175 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:30:05.175 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:30:05.175 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:30:05.175 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:05.175 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:05.175 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.175 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:05.175 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.175 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:05.175 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:05.175 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:05.436 00:30:05.436 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:05.436 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:05.436 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:05.696 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:05.696 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:05.696 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.696 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:05.696 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.696 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:05.696 { 00:30:05.696 "cntlid": 115, 00:30:05.696 "qid": 0, 00:30:05.696 "state": "enabled", 00:30:05.696 "thread": "nvmf_tgt_poll_group_000", 00:30:05.696 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:30:05.696 "listen_address": { 00:30:05.696 "trtype": "TCP", 00:30:05.696 "adrfam": "IPv4", 00:30:05.696 "traddr": "10.0.0.2", 00:30:05.696 "trsvcid": "4420" 00:30:05.696 }, 00:30:05.696 "peer_address": { 00:30:05.696 "trtype": "TCP", 00:30:05.696 "adrfam": "IPv4", 
00:30:05.696 "traddr": "10.0.0.1", 00:30:05.696 "trsvcid": "35532" 00:30:05.696 }, 00:30:05.696 "auth": { 00:30:05.696 "state": "completed", 00:30:05.696 "digest": "sha512", 00:30:05.696 "dhgroup": "ffdhe3072" 00:30:05.696 } 00:30:05.696 } 00:30:05.696 ]' 00:30:05.696 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:05.696 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:05.696 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:05.696 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:30:05.696 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:05.696 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:05.696 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:05.696 22:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:05.957 22:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTlmY2Y1MzA1OTQzNDU3YTRjMWE0MzVlNTU1ZGE2YjhHWWQq: --dhchap-ctrl-secret DHHC-1:02:OTNlZTNhZmFiNmUzMTdhNTdmYTY4ZGUyNmMxYTE5NWI3ZmQyNTM3Y2Y5OTNkNmI1qvQ86g==: 00:30:05.957 22:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:01:MTlmY2Y1MzA1OTQzNDU3YTRjMWE0MzVlNTU1ZGE2YjhHWWQq: --dhchap-ctrl-secret DHHC-1:02:OTNlZTNhZmFiNmUzMTdhNTdmYTY4ZGUyNmMxYTE5NWI3ZmQyNTM3Y2Y5OTNkNmI1qvQ86g==: 00:30:06.899 22:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:06.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:06.899 22:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:30:06.899 22:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.899 22:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:06.899 22:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.899 22:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:06.899 22:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:06.899 22:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:06.899 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:30:06.899 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:06.899 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:30:06.899 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:30:06.899 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:30:06.899 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:06.899 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:06.899 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.899 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:06.899 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.899 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:06.899 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:06.899 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:07.160 00:30:07.160 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:07.160 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:07.160 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:07.421 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:07.421 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:07.421 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.421 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:07.421 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.421 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:07.421 { 00:30:07.421 "cntlid": 117, 00:30:07.421 "qid": 0, 00:30:07.421 "state": "enabled", 00:30:07.421 "thread": "nvmf_tgt_poll_group_000", 00:30:07.421 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:30:07.421 "listen_address": { 00:30:07.421 "trtype": "TCP", 
00:30:07.421 "adrfam": "IPv4", 00:30:07.421 "traddr": "10.0.0.2", 00:30:07.421 "trsvcid": "4420" 00:30:07.421 }, 00:30:07.421 "peer_address": { 00:30:07.421 "trtype": "TCP", 00:30:07.421 "adrfam": "IPv4", 00:30:07.421 "traddr": "10.0.0.1", 00:30:07.421 "trsvcid": "53894" 00:30:07.421 }, 00:30:07.421 "auth": { 00:30:07.421 "state": "completed", 00:30:07.421 "digest": "sha512", 00:30:07.421 "dhgroup": "ffdhe3072" 00:30:07.421 } 00:30:07.421 } 00:30:07.421 ]' 00:30:07.421 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:07.421 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:07.421 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:07.421 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:30:07.421 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:07.421 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:07.421 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:07.421 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:07.682 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2E5N2Q0MjVmNDNjOGYxZGQ3YzQ5ZmZhYzIyZjk0MzU0MGViZTAyYzEzYzkxMTZhU0B5KA==: --dhchap-ctrl-secret DHHC-1:01:NTRiOGYyNWE3ZmFiZjM0YTY3Y2M5MmM5MTAxOWM1ZjDN3lJm: 00:30:07.682 22:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:02:Y2E5N2Q0MjVmNDNjOGYxZGQ3YzQ5ZmZhYzIyZjk0MzU0MGViZTAyYzEzYzkxMTZhU0B5KA==: --dhchap-ctrl-secret DHHC-1:01:NTRiOGYyNWE3ZmFiZjM0YTY3Y2M5MmM5MTAxOWM1ZjDN3lJm: 00:30:08.623 22:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:08.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:08.624 22:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:30:08.624 22:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.624 22:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:08.624 22:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.624 22:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:08.624 22:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:08.624 22:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:08.624 22:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:30:08.624 22:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:08.624 22:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:30:08.624 22:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:30:08.624 22:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:30:08.624 22:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:08.624 22:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:30:08.624 22:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.624 22:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:08.624 22:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.624 22:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:30:08.624 22:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:30:08.624 22:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:30:08.884 00:30:08.884 22:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:08.884 22:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:08.884 22:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:09.144 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:09.144 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:09.144 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.144 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:09.144 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.144 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:09.144 { 00:30:09.144 "cntlid": 119, 00:30:09.144 "qid": 0, 00:30:09.144 "state": "enabled", 00:30:09.144 "thread": "nvmf_tgt_poll_group_000", 00:30:09.144 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:30:09.144 "listen_address": { 00:30:09.144 "trtype": "TCP", 00:30:09.144 "adrfam": "IPv4", 00:30:09.144 "traddr": "10.0.0.2", 00:30:09.144 "trsvcid": "4420" 00:30:09.144 }, 00:30:09.144 "peer_address": { 00:30:09.144 "trtype": "TCP", 00:30:09.144 "adrfam": "IPv4", 00:30:09.144 "traddr": "10.0.0.1", 00:30:09.144 "trsvcid": "53924" 00:30:09.144 }, 00:30:09.144 "auth": { 00:30:09.144 "state": "completed", 00:30:09.144 "digest": "sha512", 00:30:09.144 "dhgroup": "ffdhe3072" 00:30:09.144 } 00:30:09.144 } 00:30:09.144 ]' 00:30:09.144 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:09.144 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:09.144 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:09.144 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:30:09.144 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:09.144 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:09.144 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:09.144 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:09.403 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmRjZjBlNWY4Y2FjY2E1ODM3ZmZkZjYxNDBjZjJjMDg5MzE4MGViYTljYjYwOWQ1YmU4OGM4NGNiMTE3OWI5NASd1M0=: 00:30:09.404 22:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:03:ZmRjZjBlNWY4Y2FjY2E1ODM3ZmZkZjYxNDBjZjJjMDg5MzE4MGViYTljYjYwOWQ1YmU4OGM4NGNiMTE3OWI5NASd1M0=: 00:30:10.343 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:10.343 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:10.343 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:30:10.343 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.343 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:10.343 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.343 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:30:10.343 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:10.343 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:10.343 22:29:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:10.343 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:30:10.343 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:10.343 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:30:10.343 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:30:10.343 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:30:10.343 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:10.343 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:10.343 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.343 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:10.343 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.343 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:10.343 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:10.343 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:10.603 00:30:10.603 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:10.603 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:10.603 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:10.862 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:10.862 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:10.862 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.862 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:10.862 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.862 22:29:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:10.862 { 00:30:10.862 "cntlid": 121, 00:30:10.862 "qid": 0, 00:30:10.862 "state": "enabled", 00:30:10.862 "thread": "nvmf_tgt_poll_group_000", 00:30:10.862 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:30:10.862 "listen_address": { 00:30:10.862 "trtype": "TCP", 00:30:10.862 "adrfam": "IPv4", 00:30:10.862 "traddr": "10.0.0.2", 00:30:10.862 "trsvcid": "4420" 00:30:10.862 }, 00:30:10.862 "peer_address": { 00:30:10.862 "trtype": "TCP", 00:30:10.862 "adrfam": "IPv4", 00:30:10.862 "traddr": "10.0.0.1", 00:30:10.862 "trsvcid": "53950" 00:30:10.862 }, 00:30:10.862 "auth": { 00:30:10.862 "state": "completed", 00:30:10.862 "digest": "sha512", 00:30:10.862 "dhgroup": "ffdhe4096" 00:30:10.862 } 00:30:10.862 } 00:30:10.862 ]' 00:30:10.862 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:10.862 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:10.862 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:10.862 22:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:30:10.862 22:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:10.862 22:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:10.862 22:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:10.862 22:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:11.122 22:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjNjM2IxZjk3OTkwYjNlNGExMjc5ZDc1MTFiZGNlNjVlODUxZWU0YmVjNDJlMWQ5Ps097Q==: --dhchap-ctrl-secret DHHC-1:03:NzBkNjkxNDA0ZDc5Mzg4OTI1MGMwMTA5YmZlNmIwNGM2ZTc0ZjgyM2VmMWVkMDk0YTUxOGZiNWIwNWI4YThkN3OXbH8=: 00:30:11.122 22:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:00:YjNjM2IxZjk3OTkwYjNlNGExMjc5ZDc1MTFiZGNlNjVlODUxZWU0YmVjNDJlMWQ5Ps097Q==: --dhchap-ctrl-secret DHHC-1:03:NzBkNjkxNDA0ZDc5Mzg4OTI1MGMwMTA5YmZlNmIwNGM2ZTc0ZjgyM2VmMWVkMDk0YTUxOGZiNWIwNWI4YThkN3OXbH8=: 00:30:12.063 22:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:12.063 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:12.063 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:30:12.063 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.063 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:12.063 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
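The kernel-initiator leg that just completed (nvme connect ... --dhchap-secret ... followed by nvme disconnect) repeats after every in-app verification: the same key material is handed to the Linux NVMe/TCP host stack in the spec's DHHC-1 container format. The field after the DHHC-1: prefix records how the base64 secret was transformed (00 = no transformation, 01/02/03 = SHA-256/384/512), which is why the host and controller secrets above carry different prefixes. --dhchap-secret authenticates the host to the controller; supplying --dhchap-ctrl-secret as well requests bidirectional authentication, so the controller must also prove possession of the ckey. A minimal sketch of the pair of commands, with <...> placeholders standing in for the literal DHHC-1 strings in the trace:

# Kernel NVMe/TCP initiator leg; flags mirror the nvme_connect helper above.
# <base64 ...> placeholders stand in for the real DHHC-1 secrets in this log.
subnqn=nqn.2024-03.io.spdk:cnode0
hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204
hostnqn="nqn.2014-08.org.nvmexpress:uuid:$hostid"

nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid "$hostid" -l 0 \
    --dhchap-secret 'DHHC-1:00:<base64 host secret>:' \
    --dhchap-ctrl-secret 'DHHC-1:03:<base64 controller secret>:'

# "NQN:... disconnected 1 controller(s)" on the way out confirms the
# authenticated association was actually established.
nvme disconnect -n "$subnqn"

Note that nvme connect is issued without -s: nvme-cli defaults the TCP transport service ID to 4420, matching the listener configured on the target.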
00:30:12.063 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:12.063 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:12.063 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:12.063 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:30:12.063 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:12.063 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:30:12.063 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:30:12.063 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:30:12.063 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:12.063 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:12.063 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.064 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:12.064 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.064 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:12.064 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:12.064 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:12.325 00:30:12.325 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:12.325 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:12.325 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:12.586 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:12.586 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:12.586 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.586 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:12.586 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.586 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:12.586 { 00:30:12.586 "cntlid": 123, 00:30:12.586 "qid": 0, 00:30:12.586 "state": "enabled", 00:30:12.586 "thread": "nvmf_tgt_poll_group_000", 00:30:12.586 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:30:12.586 "listen_address": { 00:30:12.586 "trtype": "TCP", 00:30:12.586 "adrfam": "IPv4", 00:30:12.586 "traddr": "10.0.0.2", 00:30:12.586 "trsvcid": "4420" 00:30:12.586 }, 00:30:12.586 "peer_address": { 00:30:12.586 "trtype": "TCP", 00:30:12.586 "adrfam": "IPv4", 00:30:12.586 "traddr": "10.0.0.1", 00:30:12.586 "trsvcid": "53986" 00:30:12.586 }, 00:30:12.586 "auth": { 00:30:12.586 "state": "completed", 00:30:12.586 "digest": "sha512", 00:30:12.586 "dhgroup": "ffdhe4096" 00:30:12.586 } 00:30:12.586 } 00:30:12.586 ]' 00:30:12.586 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:12.586 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:12.586 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:12.586 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:30:12.586 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:12.586 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:12.586 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:12.586 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:12.846 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTlmY2Y1MzA1OTQzNDU3YTRjMWE0MzVlNTU1ZGE2YjhHWWQq: --dhchap-ctrl-secret DHHC-1:02:OTNlZTNhZmFiNmUzMTdhNTdmYTY4ZGUyNmMxYTE5NWI3ZmQyNTM3Y2Y5OTNkNmI1qvQ86g==: 00:30:12.846 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:01:MTlmY2Y1MzA1OTQzNDU3YTRjMWE0MzVlNTU1ZGE2YjhHWWQq: --dhchap-ctrl-secret DHHC-1:02:OTNlZTNhZmFiNmUzMTdhNTdmYTY4ZGUyNmMxYTE5NWI3ZmQyNTM3Y2Y5OTNkNmI1qvQ86g==: 00:30:13.788 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:13.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:13.788 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:30:13.788 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.788 22:29:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:13.788 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.788 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:13.788 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:13.788 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:13.788 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:30:13.788 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:13.788 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:30:13.788 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:30:13.788 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:30:13.788 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:13.788 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:13.788 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.788 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:13.788 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.788 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:13.788 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:13.788 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:14.070 00:30:14.070 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:14.070 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:14.070 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:14.330 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:14.330 22:29:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:14.330 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.330 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:14.330 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.330 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:14.330 { 00:30:14.330 "cntlid": 125, 00:30:14.330 "qid": 0, 00:30:14.330 "state": "enabled", 00:30:14.330 "thread": "nvmf_tgt_poll_group_000", 00:30:14.330 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:30:14.330 "listen_address": { 00:30:14.330 "trtype": "TCP", 00:30:14.330 "adrfam": "IPv4", 00:30:14.330 "traddr": "10.0.0.2", 00:30:14.330 "trsvcid": "4420" 00:30:14.330 }, 00:30:14.330 "peer_address": { 00:30:14.330 "trtype": "TCP", 00:30:14.330 "adrfam": "IPv4", 00:30:14.330 "traddr": "10.0.0.1", 00:30:14.330 "trsvcid": "54006" 00:30:14.330 }, 00:30:14.330 "auth": { 00:30:14.330 "state": "completed", 00:30:14.330 "digest": "sha512", 00:30:14.330 "dhgroup": "ffdhe4096" 00:30:14.330 } 00:30:14.330 } 00:30:14.330 ]' 00:30:14.330 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:14.330 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:14.330 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:14.330 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:30:14.330 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:14.330 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:14.330 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:14.330 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:14.592 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2E5N2Q0MjVmNDNjOGYxZGQ3YzQ5ZmZhYzIyZjk0MzU0MGViZTAyYzEzYzkxMTZhU0B5KA==: --dhchap-ctrl-secret DHHC-1:01:NTRiOGYyNWE3ZmFiZjM0YTY3Y2M5MmM5MTAxOWM1ZjDN3lJm: 00:30:14.592 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:02:Y2E5N2Q0MjVmNDNjOGYxZGQ3YzQ5ZmZhYzIyZjk0MzU0MGViZTAyYzEzYzkxMTZhU0B5KA==: --dhchap-ctrl-secret DHHC-1:01:NTRiOGYyNWE3ZmFiZjM0YTY3Y2M5MmM5MTAxOWM1ZjDN3lJm: 00:30:15.533 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:15.533 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:15.533 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:30:15.533 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.533 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:15.533 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.533 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:15.533 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:15.533 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:15.533 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:30:15.533 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:15.533 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:30:15.533 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:30:15.533 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:30:15.533 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:15.533 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:30:15.533 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.533 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:15.533 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.533 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:30:15.533 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:30:15.533 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:30:15.793 00:30:15.793 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:15.793 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:15.793 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:16.054 22:29:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:16.054 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:16.054 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.054 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:16.054 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.054 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:16.054 { 00:30:16.054 "cntlid": 127, 00:30:16.054 "qid": 0, 00:30:16.054 "state": "enabled", 00:30:16.054 "thread": "nvmf_tgt_poll_group_000", 00:30:16.054 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:30:16.054 "listen_address": { 00:30:16.054 "trtype": "TCP", 00:30:16.054 "adrfam": "IPv4", 00:30:16.054 "traddr": "10.0.0.2", 00:30:16.054 "trsvcid": "4420" 00:30:16.054 }, 00:30:16.054 "peer_address": { 00:30:16.054 "trtype": "TCP", 00:30:16.054 "adrfam": "IPv4", 00:30:16.054 "traddr": "10.0.0.1", 00:30:16.054 "trsvcid": "54020" 00:30:16.054 }, 00:30:16.054 "auth": { 00:30:16.054 "state": "completed", 00:30:16.054 "digest": "sha512", 00:30:16.054 "dhgroup": "ffdhe4096" 00:30:16.054 } 00:30:16.054 } 00:30:16.054 ]' 00:30:16.054 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:16.054 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:16.054 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:16.054 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:30:16.054 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:16.054 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:16.054 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:16.054 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:16.315 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmRjZjBlNWY4Y2FjY2E1ODM3ZmZkZjYxNDBjZjJjMDg5MzE4MGViYTljYjYwOWQ1YmU4OGM4NGNiMTE3OWI5NASd1M0=: 00:30:16.315 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:03:ZmRjZjBlNWY4Y2FjY2E1ODM3ZmZkZjYxNDBjZjJjMDg5MzE4MGViYTljYjYwOWQ1YmU4OGM4NGNiMTE3OWI5NASd1M0=: 00:30:17.256 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:17.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:17.256 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:30:17.256 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.256 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:17.256 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.256 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:30:17.256 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:17.256 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:17.256 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:17.256 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:30:17.256 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:17.256 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:30:17.256 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:30:17.256 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:30:17.256 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:17.256 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:17.256 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.256 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:17.256 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.256 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:17.256 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:17.256 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:17.516 00:30:17.777 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:17.777 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:17.777 
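
The exchange above is one pass of auth.sh's digest/dhgroup/key matrix: the host is pinned to a single DH-HMAC-CHAP digest and DH group via bdev_nvme_set_options, the host NQN is (re)registered on the subsystem with the key under test, and authentication itself runs while the controller attaches. A condensed sketch of one pass, with rpc_cmd talking to the target and hostrpc to the host app at /var/tmp/host.sock; the hostrpc wrapper and the loop bounds are assumptions inferred from the auth.sh@119-@123 xtrace markers, and key0..key3/ckey0..ckey2 refer to key names registered earlier in the test:

    hostrpc() { scripts/rpc.py -s /var/tmp/host.sock "$@"; }   # assumed wrapper

    for keyid in 0 1 2 3; do
        # Pin the host to one digest/dhgroup combination for this pass.
        hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

        # The controller key is optional; key3 has none in this run
        # (matches the "ckeys" expansion visible in the trace above).
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})

        # Allow the host NQN on the subsystem with the key under test.
        rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
            --dhchap-key "key$keyid" "${ckey[@]}"

        # DH-HMAC-CHAP runs during controller attach; a failure surfaces here.
        hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
            -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
            --dhchap-key "key$keyid" "${ckey[@]}"
    done
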
22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:17.778 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:17.778 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:17.778 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.778 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:17.778 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.778 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:17.778 { 00:30:17.778 "cntlid": 129, 00:30:17.778 "qid": 0, 00:30:17.778 "state": "enabled", 00:30:17.778 "thread": "nvmf_tgt_poll_group_000", 00:30:17.778 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:30:17.778 "listen_address": { 00:30:17.778 "trtype": "TCP", 00:30:17.778 "adrfam": "IPv4", 00:30:17.778 "traddr": "10.0.0.2", 00:30:17.778 "trsvcid": "4420" 00:30:17.778 }, 00:30:17.778 "peer_address": { 00:30:17.778 "trtype": "TCP", 00:30:17.778 "adrfam": "IPv4", 00:30:17.778 "traddr": "10.0.0.1", 00:30:17.778 "trsvcid": "53376" 00:30:17.778 }, 00:30:17.778 "auth": { 00:30:17.778 "state": "completed", 00:30:17.778 "digest": "sha512", 00:30:17.778 "dhgroup": "ffdhe6144" 00:30:17.778 } 00:30:17.778 } 00:30:17.778 ]' 00:30:17.778 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:18.038 22:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:18.038 22:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:18.038 22:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:30:18.038 22:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:18.038 22:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:18.038 22:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:18.039 22:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:18.039 22:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjNjM2IxZjk3OTkwYjNlNGExMjc5ZDc1MTFiZGNlNjVlODUxZWU0YmVjNDJlMWQ5Ps097Q==: --dhchap-ctrl-secret DHHC-1:03:NzBkNjkxNDA0ZDc5Mzg4OTI1MGMwMTA5YmZlNmIwNGM2ZTc0ZjgyM2VmMWVkMDk0YTUxOGZiNWIwNWI4YThkN3OXbH8=: 00:30:18.039 22:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:00:YjNjM2IxZjk3OTkwYjNlNGExMjc5ZDc1MTFiZGNlNjVlODUxZWU0YmVjNDJlMWQ5Ps097Q==: --dhchap-ctrl-secret 
DHHC-1:03:NzBkNjkxNDA0ZDc5Mzg4OTI1MGMwMTA5YmZlNmIwNGM2ZTc0ZjgyM2VmMWVkMDk0YTUxOGZiNWIwNWI4YThkN3OXbH8=: 00:30:19.106 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:19.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:19.106 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:30:19.106 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.106 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:19.106 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.106 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:19.106 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:19.106 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:19.106 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:30:19.106 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:19.106 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:30:19.106 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:30:19.106 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:30:19.106 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:19.106 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:19.106 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.106 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:19.106 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.106 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:19.106 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:19.106 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:19.366 00:30:19.627 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:19.627 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:19.627 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:19.627 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:19.627 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:19.627 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.627 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:19.627 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.627 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:19.627 { 00:30:19.627 "cntlid": 131, 00:30:19.627 "qid": 0, 00:30:19.627 "state": "enabled", 00:30:19.627 "thread": "nvmf_tgt_poll_group_000", 00:30:19.627 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:30:19.627 "listen_address": { 00:30:19.627 "trtype": "TCP", 00:30:19.627 "adrfam": "IPv4", 00:30:19.627 "traddr": "10.0.0.2", 00:30:19.627 "trsvcid": "4420" 00:30:19.627 }, 00:30:19.627 "peer_address": { 00:30:19.627 "trtype": "TCP", 00:30:19.627 "adrfam": "IPv4", 00:30:19.627 "traddr": "10.0.0.1", 00:30:19.627 "trsvcid": "53410" 00:30:19.627 }, 00:30:19.627 "auth": { 00:30:19.627 "state": "completed", 00:30:19.627 "digest": "sha512", 00:30:19.627 "dhgroup": "ffdhe6144" 00:30:19.627 } 00:30:19.627 } 00:30:19.627 ]' 00:30:19.627 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:19.627 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:19.627 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:19.887 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:30:19.887 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:19.887 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:19.887 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:19.888 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:19.888 22:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTlmY2Y1MzA1OTQzNDU3YTRjMWE0MzVlNTU1ZGE2YjhHWWQq: --dhchap-ctrl-secret DHHC-1:02:OTNlZTNhZmFiNmUzMTdhNTdmYTY4ZGUyNmMxYTE5NWI3ZmQyNTM3Y2Y5OTNkNmI1qvQ86g==: 00:30:19.888 22:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:01:MTlmY2Y1MzA1OTQzNDU3YTRjMWE0MzVlNTU1ZGE2YjhHWWQq: --dhchap-ctrl-secret DHHC-1:02:OTNlZTNhZmFiNmUzMTdhNTdmYTY4ZGUyNmMxYTE5NWI3ZmQyNTM3Y2Y5OTNkNmI1qvQ86g==: 00:30:20.829 22:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:20.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:20.829 22:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:30:20.829 22:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.829 22:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:20.829 22:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.829 22:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:20.829 22:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:20.829 22:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:21.089 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:30:21.089 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:21.090 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:30:21.090 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:30:21.090 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:30:21.090 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:21.090 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:21.090 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.090 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:21.090 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.090 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:21.090 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:21.090 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:21.350 00:30:21.350 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:21.350 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:21.350 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:21.611 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:21.611 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:21.611 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.611 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:21.611 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.611 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:21.611 { 00:30:21.611 "cntlid": 133, 00:30:21.611 "qid": 0, 00:30:21.611 "state": "enabled", 00:30:21.611 "thread": "nvmf_tgt_poll_group_000", 00:30:21.611 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:30:21.611 "listen_address": { 00:30:21.611 "trtype": "TCP", 00:30:21.611 "adrfam": "IPv4", 00:30:21.611 "traddr": "10.0.0.2", 00:30:21.611 "trsvcid": "4420" 00:30:21.611 }, 00:30:21.611 "peer_address": { 00:30:21.611 "trtype": "TCP", 00:30:21.611 "adrfam": "IPv4", 00:30:21.611 "traddr": "10.0.0.1", 00:30:21.611 "trsvcid": "53428" 00:30:21.611 }, 00:30:21.611 "auth": { 00:30:21.611 "state": "completed", 00:30:21.611 "digest": "sha512", 00:30:21.611 "dhgroup": "ffdhe6144" 00:30:21.611 } 00:30:21.611 } 00:30:21.611 ]' 00:30:21.611 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:21.611 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:21.611 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:21.611 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:30:21.611 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:21.611 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:21.611 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:21.611 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:21.872 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2E5N2Q0MjVmNDNjOGYxZGQ3YzQ5ZmZhYzIyZjk0MzU0MGViZTAyYzEzYzkxMTZhU0B5KA==: --dhchap-ctrl-secret 
DHHC-1:01:NTRiOGYyNWE3ZmFiZjM0YTY3Y2M5MmM5MTAxOWM1ZjDN3lJm: 00:30:21.872 22:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:02:Y2E5N2Q0MjVmNDNjOGYxZGQ3YzQ5ZmZhYzIyZjk0MzU0MGViZTAyYzEzYzkxMTZhU0B5KA==: --dhchap-ctrl-secret DHHC-1:01:NTRiOGYyNWE3ZmFiZjM0YTY3Y2M5MmM5MTAxOWM1ZjDN3lJm: 00:30:22.814 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:22.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:22.814 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:30:22.814 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.814 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:22.814 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.814 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:22.814 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:22.814 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:22.814 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:30:22.814 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:22.814 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:30:22.814 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:30:22.814 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:30:22.814 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:22.814 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:30:22.814 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.814 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:22.814 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.814 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:30:22.814 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:30:22.814 22:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:30:23.075 00:30:23.075 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:23.075 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:23.075 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:23.337 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:23.337 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:23.337 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:23.337 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:23.337 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:23.337 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:23.337 { 00:30:23.337 "cntlid": 135, 00:30:23.337 "qid": 0, 00:30:23.337 "state": "enabled", 00:30:23.337 "thread": "nvmf_tgt_poll_group_000", 00:30:23.337 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:30:23.337 "listen_address": { 00:30:23.337 "trtype": "TCP", 00:30:23.337 "adrfam": "IPv4", 00:30:23.337 "traddr": "10.0.0.2", 00:30:23.337 "trsvcid": "4420" 00:30:23.337 }, 00:30:23.337 "peer_address": { 00:30:23.337 "trtype": "TCP", 00:30:23.337 "adrfam": "IPv4", 00:30:23.337 "traddr": "10.0.0.1", 00:30:23.337 "trsvcid": "53464" 00:30:23.337 }, 00:30:23.337 "auth": { 00:30:23.337 "state": "completed", 00:30:23.337 "digest": "sha512", 00:30:23.337 "dhgroup": "ffdhe6144" 00:30:23.337 } 00:30:23.337 } 00:30:23.337 ]' 00:30:23.337 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:23.337 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:23.337 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:23.337 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:30:23.337 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:23.597 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:23.598 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:23.598 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:23.598 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZmRjZjBlNWY4Y2FjY2E1ODM3ZmZkZjYxNDBjZjJjMDg5MzE4MGViYTljYjYwOWQ1YmU4OGM4NGNiMTE3OWI5NASd1M0=: 00:30:23.598 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:03:ZmRjZjBlNWY4Y2FjY2E1ODM3ZmZkZjYxNDBjZjJjMDg5MzE4MGViYTljYjYwOWQ1YmU4OGM4NGNiMTE3OWI5NASd1M0=: 00:30:24.540 22:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:24.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:24.540 22:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:30:24.540 22:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:24.540 22:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:24.540 22:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:24.540 22:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:30:24.540 22:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:24.540 22:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:24.540 22:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:24.540 22:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:30:24.540 22:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:24.540 22:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:30:24.540 22:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:30:24.540 22:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:30:24.540 22:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:24.540 22:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:24.540 22:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:24.540 22:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:24.540 22:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:24.540 22:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:24.540 22:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:24.540 22:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:25.112 00:30:25.112 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:25.112 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:25.112 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:25.373 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:25.373 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:25.373 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:25.373 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:25.373 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:25.373 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:25.373 { 00:30:25.373 "cntlid": 137, 00:30:25.373 "qid": 0, 00:30:25.373 "state": "enabled", 00:30:25.373 "thread": "nvmf_tgt_poll_group_000", 00:30:25.373 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:30:25.373 "listen_address": { 00:30:25.373 "trtype": "TCP", 00:30:25.373 "adrfam": "IPv4", 00:30:25.373 "traddr": "10.0.0.2", 00:30:25.373 "trsvcid": "4420" 00:30:25.373 }, 00:30:25.373 "peer_address": { 00:30:25.373 "trtype": "TCP", 00:30:25.373 "adrfam": "IPv4", 00:30:25.373 "traddr": "10.0.0.1", 00:30:25.373 "trsvcid": "53490" 00:30:25.373 }, 00:30:25.373 "auth": { 00:30:25.373 "state": "completed", 00:30:25.373 "digest": "sha512", 00:30:25.373 "dhgroup": "ffdhe8192" 00:30:25.373 } 00:30:25.373 } 00:30:25.373 ]' 00:30:25.373 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:25.373 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:25.373 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:25.373 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:30:25.373 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:25.373 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:25.373 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:25.373 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:25.634 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjNjM2IxZjk3OTkwYjNlNGExMjc5ZDc1MTFiZGNlNjVlODUxZWU0YmVjNDJlMWQ5Ps097Q==: --dhchap-ctrl-secret DHHC-1:03:NzBkNjkxNDA0ZDc5Mzg4OTI1MGMwMTA5YmZlNmIwNGM2ZTc0ZjgyM2VmMWVkMDk0YTUxOGZiNWIwNWI4YThkN3OXbH8=: 00:30:25.634 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:00:YjNjM2IxZjk3OTkwYjNlNGExMjc5ZDc1MTFiZGNlNjVlODUxZWU0YmVjNDJlMWQ5Ps097Q==: --dhchap-ctrl-secret DHHC-1:03:NzBkNjkxNDA0ZDc5Mzg4OTI1MGMwMTA5YmZlNmIwNGM2ZTc0ZjgyM2VmMWVkMDk0YTUxOGZiNWIwNWI4YThkN3OXbH8=: 00:30:26.588 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:26.588 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:26.588 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:30:26.588 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.588 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:26.588 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.588 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:26.588 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:26.588 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:26.588 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:30:26.588 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:26.588 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:30:26.588 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:30:26.588 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:30:26.588 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:26.588 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:26.588 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.588 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:26.588 22:29:21 
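
The --dhchap-secret / --dhchap-ctrl-secret strings throughout this log use the NVMe-oF printable secret encoding, DHHC-1:tt:<base64>:, where tt selects the hash used to transform the key (00 = no transform, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload carries the key bytes followed by a 4-byte CRC-32. That reading comes from the DH-HMAC-CHAP printable-secret convention as implemented by nvme-cli and SPDK, not from anything in this log, and the helper below is purely hypothetical:

    # Hypothetical helper: split a DHHC-1 secret into its fields.
    inspect_dhchap_secret() {
        local s=$1 b64
        b64=${s#DHHC-1:??:}          # drop the "DHHC-1:tt:" prefix
        b64=${b64%:}                 # drop the trailing colon
        echo "hash id      : $(cut -d: -f2 <<< "$s")"
        echo "decoded bytes: $(printf '%s' "$b64" | base64 -d | wc -c) (key + 4-byte CRC)"
    }

    inspect_dhchap_secret 'DHHC-1:00:YjNjM2IxZjk3OTkwYjNlNGExMjc5ZDc1MTFiZGNlNjVlODUxZWU0YmVjNDJlMWQ5Ps097Q==:'
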
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.588 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:26.588 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:26.588 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:27.161 00:30:27.161 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:27.161 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:27.161 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:27.423 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:27.423 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:27.423 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.423 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:27.423 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.423 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:27.423 { 00:30:27.423 "cntlid": 139, 00:30:27.423 "qid": 0, 00:30:27.423 "state": "enabled", 00:30:27.423 "thread": "nvmf_tgt_poll_group_000", 00:30:27.423 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:30:27.423 "listen_address": { 00:30:27.423 "trtype": "TCP", 00:30:27.423 "adrfam": "IPv4", 00:30:27.423 "traddr": "10.0.0.2", 00:30:27.423 "trsvcid": "4420" 00:30:27.423 }, 00:30:27.423 "peer_address": { 00:30:27.423 "trtype": "TCP", 00:30:27.423 "adrfam": "IPv4", 00:30:27.423 "traddr": "10.0.0.1", 00:30:27.423 "trsvcid": "45442" 00:30:27.423 }, 00:30:27.423 "auth": { 00:30:27.423 "state": "completed", 00:30:27.423 "digest": "sha512", 00:30:27.423 "dhgroup": "ffdhe8192" 00:30:27.423 } 00:30:27.423 } 00:30:27.423 ]' 00:30:27.423 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:27.423 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:27.423 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:27.423 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:30:27.423 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:27.423 22:29:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:27.423 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:27.423 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:27.685 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTlmY2Y1MzA1OTQzNDU3YTRjMWE0MzVlNTU1ZGE2YjhHWWQq: --dhchap-ctrl-secret DHHC-1:02:OTNlZTNhZmFiNmUzMTdhNTdmYTY4ZGUyNmMxYTE5NWI3ZmQyNTM3Y2Y5OTNkNmI1qvQ86g==: 00:30:27.685 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:01:MTlmY2Y1MzA1OTQzNDU3YTRjMWE0MzVlNTU1ZGE2YjhHWWQq: --dhchap-ctrl-secret DHHC-1:02:OTNlZTNhZmFiNmUzMTdhNTdmYTY4ZGUyNmMxYTE5NWI3ZmQyNTM3Y2Y5OTNkNmI1qvQ86g==: 00:30:28.628 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:28.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:28.628 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:30:28.628 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.628 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:28.628 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.628 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:28.628 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:28.628 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:28.628 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:30:28.628 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:28.628 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:30:28.628 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:30:28.628 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:30:28.628 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:28.628 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:28.628 22:29:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.628 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:28.628 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.628 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:28.628 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:28.628 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:29.202 00:30:29.202 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:29.202 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:29.202 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:29.463 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:29.463 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:29.463 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.463 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:29.463 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.463 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:29.463 { 00:30:29.463 "cntlid": 141, 00:30:29.463 "qid": 0, 00:30:29.463 "state": "enabled", 00:30:29.463 "thread": "nvmf_tgt_poll_group_000", 00:30:29.463 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:30:29.463 "listen_address": { 00:30:29.463 "trtype": "TCP", 00:30:29.463 "adrfam": "IPv4", 00:30:29.463 "traddr": "10.0.0.2", 00:30:29.463 "trsvcid": "4420" 00:30:29.463 }, 00:30:29.463 "peer_address": { 00:30:29.463 "trtype": "TCP", 00:30:29.463 "adrfam": "IPv4", 00:30:29.463 "traddr": "10.0.0.1", 00:30:29.463 "trsvcid": "45468" 00:30:29.463 }, 00:30:29.463 "auth": { 00:30:29.463 "state": "completed", 00:30:29.463 "digest": "sha512", 00:30:29.463 "dhgroup": "ffdhe8192" 00:30:29.463 } 00:30:29.463 } 00:30:29.463 ]' 00:30:29.463 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:29.463 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:29.463 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:29.463 22:29:24 
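
Every attach in this log is followed by the same verification, visible in the auth.sh@73-@78 lines: the host app must list exactly one controller named nvme0, and the target's view of the new admin qpair must report the negotiated digest, DH group, and an auth state of "completed" before the controller is detached. Condensed into a sketch (the intermediate variables are assumptions; hostrpc is the same assumed wrapper as above):

    # Host side: the controller came up under the expected name.
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # Target side: the qpair finished DH-HMAC-CHAP with the expected parameters.
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

    # Tear down before the next digest/dhgroup/key combination.
    hostrpc bdev_nvme_detach_controller nvme0
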
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:30:29.463 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:29.463 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:29.463 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:29.463 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:29.723 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2E5N2Q0MjVmNDNjOGYxZGQ3YzQ5ZmZhYzIyZjk0MzU0MGViZTAyYzEzYzkxMTZhU0B5KA==: --dhchap-ctrl-secret DHHC-1:01:NTRiOGYyNWE3ZmFiZjM0YTY3Y2M5MmM5MTAxOWM1ZjDN3lJm: 00:30:29.723 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:02:Y2E5N2Q0MjVmNDNjOGYxZGQ3YzQ5ZmZhYzIyZjk0MzU0MGViZTAyYzEzYzkxMTZhU0B5KA==: --dhchap-ctrl-secret DHHC-1:01:NTRiOGYyNWE3ZmFiZjM0YTY3Y2M5MmM5MTAxOWM1ZjDN3lJm: 00:30:30.305 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:30.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:30.570 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:30:30.570 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.570 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:30.570 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.570 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:30.570 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:30.570 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:30.570 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:30:30.570 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:30.570 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:30:30.570 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:30:30.570 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:30:30.570 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:30.570 22:29:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:30:30.570 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.570 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:30.570 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.570 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:30:30.571 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:30:30.571 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:30:31.144 00:30:31.144 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:31.144 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:31.144 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:31.406 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:31.406 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:31.406 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.406 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:31.406 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.406 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:31.406 { 00:30:31.406 "cntlid": 143, 00:30:31.406 "qid": 0, 00:30:31.406 "state": "enabled", 00:30:31.406 "thread": "nvmf_tgt_poll_group_000", 00:30:31.406 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:30:31.406 "listen_address": { 00:30:31.406 "trtype": "TCP", 00:30:31.406 "adrfam": "IPv4", 00:30:31.406 "traddr": "10.0.0.2", 00:30:31.406 "trsvcid": "4420" 00:30:31.406 }, 00:30:31.406 "peer_address": { 00:30:31.406 "trtype": "TCP", 00:30:31.406 "adrfam": "IPv4", 00:30:31.406 "traddr": "10.0.0.1", 00:30:31.406 "trsvcid": "45492" 00:30:31.406 }, 00:30:31.406 "auth": { 00:30:31.406 "state": "completed", 00:30:31.406 "digest": "sha512", 00:30:31.406 "dhgroup": "ffdhe8192" 00:30:31.406 } 00:30:31.406 } 00:30:31.406 ]' 00:30:31.406 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:31.406 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:31.406 
22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:31.406 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:30:31.406 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:31.406 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:31.406 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:31.406 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:31.667 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmRjZjBlNWY4Y2FjY2E1ODM3ZmZkZjYxNDBjZjJjMDg5MzE4MGViYTljYjYwOWQ1YmU4OGM4NGNiMTE3OWI5NASd1M0=: 00:30:31.667 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:03:ZmRjZjBlNWY4Y2FjY2E1ODM3ZmZkZjYxNDBjZjJjMDg5MzE4MGViYTljYjYwOWQ1YmU4OGM4NGNiMTE3OWI5NASd1M0=: 00:30:32.610 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:32.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:32.611 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:30:32.611 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.611 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:32.611 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.611 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:30:32.611 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:30:32.611 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:30:32.611 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:32.611 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:32.611 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:32.611 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:30:32.611 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:32.611 22:29:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:30:32.611 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:30:32.611 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:30:32.611 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:32.611 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:32.611 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.611 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:32.611 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.611 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:32.611 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:32.611 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:33.184 00:30:33.184 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:33.184 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:33.184 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:33.445 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:33.445 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:33.445 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.445 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:33.445 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.445 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:33.445 { 00:30:33.445 "cntlid": 145, 00:30:33.445 "qid": 0, 00:30:33.445 "state": "enabled", 00:30:33.445 "thread": "nvmf_tgt_poll_group_000", 00:30:33.445 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:30:33.445 "listen_address": { 00:30:33.445 "trtype": "TCP", 00:30:33.445 "adrfam": "IPv4", 00:30:33.445 "traddr": "10.0.0.2", 00:30:33.445 "trsvcid": "4420" 00:30:33.445 }, 00:30:33.445 "peer_address": { 00:30:33.445 
"trtype": "TCP", 00:30:33.445 "adrfam": "IPv4", 00:30:33.445 "traddr": "10.0.0.1", 00:30:33.445 "trsvcid": "45516" 00:30:33.445 }, 00:30:33.445 "auth": { 00:30:33.445 "state": "completed", 00:30:33.445 "digest": "sha512", 00:30:33.445 "dhgroup": "ffdhe8192" 00:30:33.445 } 00:30:33.445 } 00:30:33.445 ]' 00:30:33.445 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:33.445 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:33.445 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:33.445 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:30:33.445 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:33.445 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:33.445 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:33.445 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:33.706 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjNjM2IxZjk3OTkwYjNlNGExMjc5ZDc1MTFiZGNlNjVlODUxZWU0YmVjNDJlMWQ5Ps097Q==: --dhchap-ctrl-secret DHHC-1:03:NzBkNjkxNDA0ZDc5Mzg4OTI1MGMwMTA5YmZlNmIwNGM2ZTc0ZjgyM2VmMWVkMDk0YTUxOGZiNWIwNWI4YThkN3OXbH8=: 00:30:33.707 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:00:YjNjM2IxZjk3OTkwYjNlNGExMjc5ZDc1MTFiZGNlNjVlODUxZWU0YmVjNDJlMWQ5Ps097Q==: --dhchap-ctrl-secret DHHC-1:03:NzBkNjkxNDA0ZDc5Mzg4OTI1MGMwMTA5YmZlNmIwNGM2ZTc0ZjgyM2VmMWVkMDk0YTUxOGZiNWIwNWI4YThkN3OXbH8=: 00:30:34.651 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:34.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:34.651 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:30:34.651 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.651 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:34.651 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.651 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 00:30:34.651 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.651 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:34.651 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.651 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:30:34.651 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:30:34.651 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:30:34.651 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:30:34.651 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:34.651 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:30:34.651 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:34.651 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:30:34.651 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:30:34.651 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:30:34.914 request: 00:30:34.914 { 00:30:34.914 "name": "nvme0", 00:30:34.914 "trtype": "tcp", 00:30:34.914 "traddr": "10.0.0.2", 00:30:34.914 "adrfam": "ipv4", 00:30:34.914 "trsvcid": "4420", 00:30:34.914 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:30:34.914 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:30:34.914 "prchk_reftag": false, 00:30:34.914 "prchk_guard": false, 00:30:34.914 "hdgst": false, 00:30:34.914 "ddgst": false, 00:30:34.914 "dhchap_key": "key2", 00:30:34.914 "allow_unrecognized_csi": false, 00:30:34.914 "method": "bdev_nvme_attach_controller", 00:30:34.914 "req_id": 1 00:30:34.914 } 00:30:34.914 Got JSON-RPC error response 00:30:34.914 response: 00:30:34.914 { 00:30:34.914 "code": -5, 00:30:34.914 "message": "Input/output error" 00:30:34.914 } 00:30:34.914 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:30:34.914 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:34.914 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:34.914 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:34.914 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:30:34.914 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.914 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:34.914 22:29:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.914 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:34.914 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.914 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:34.914 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.914 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:34.914 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:30:34.914 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:34.914 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:30:34.914 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:34.914 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:30:34.914 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:34.914 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:34.914 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:34.914 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:35.488 request: 00:30:35.488 { 00:30:35.488 "name": "nvme0", 00:30:35.488 "trtype": "tcp", 00:30:35.488 "traddr": "10.0.0.2", 00:30:35.488 "adrfam": "ipv4", 00:30:35.488 "trsvcid": "4420", 00:30:35.488 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:30:35.488 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:30:35.488 "prchk_reftag": false, 00:30:35.488 "prchk_guard": false, 00:30:35.488 "hdgst": false, 00:30:35.488 "ddgst": false, 00:30:35.488 "dhchap_key": "key1", 00:30:35.488 "dhchap_ctrlr_key": "ckey2", 00:30:35.488 "allow_unrecognized_csi": false, 00:30:35.488 "method": "bdev_nvme_attach_controller", 00:30:35.488 "req_id": 1 00:30:35.488 } 00:30:35.488 Got JSON-RPC error response 00:30:35.488 response: 00:30:35.488 { 00:30:35.488 "code": -5, 00:30:35.488 "message": "Input/output error" 00:30:35.488 } 00:30:35.488 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:30:35.488 22:29:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:35.488 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:35.488 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:35.488 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:30:35.488 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.488 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:35.488 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.488 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 00:30:35.488 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.488 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:35.488 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.488 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:35.488 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:30:35.488 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:35.488 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:30:35.488 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:35.488 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:30:35.488 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:35.488 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:35.488 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:35.488 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:36.061 request: 00:30:36.061 { 00:30:36.061 "name": "nvme0", 00:30:36.061 "trtype": "tcp", 00:30:36.061 "traddr": "10.0.0.2", 00:30:36.061 "adrfam": "ipv4", 00:30:36.061 "trsvcid": "4420", 00:30:36.061 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:30:36.061 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:30:36.061 "prchk_reftag": false, 00:30:36.061 "prchk_guard": false, 00:30:36.061 "hdgst": false, 00:30:36.061 "ddgst": false, 00:30:36.061 "dhchap_key": "key1", 00:30:36.061 "dhchap_ctrlr_key": "ckey1", 00:30:36.061 "allow_unrecognized_csi": false, 00:30:36.061 "method": "bdev_nvme_attach_controller", 00:30:36.061 "req_id": 1 00:30:36.061 } 00:30:36.061 Got JSON-RPC error response 00:30:36.061 response: 00:30:36.061 { 00:30:36.061 "code": -5, 00:30:36.061 "message": "Input/output error" 00:30:36.061 } 00:30:36.061 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:30:36.061 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:36.061 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:36.061 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:36.061 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:30:36.061 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:36.061 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:36.061 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.061 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 181460 00:30:36.061 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 181460 ']' 00:30:36.061 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 181460 00:30:36.061 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:30:36.061 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:36.061 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 181460 00:30:36.061 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:36.061 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:36.061 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 181460' 00:30:36.061 killing process with pid 181460 00:30:36.061 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 181460 00:30:36.061 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 181460 00:30:36.323 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:30:36.323 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:36.323 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:36.323 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:30:36.323 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=209526 00:30:36.323 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 209526 00:30:36.323 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:30:36.323 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 209526 ']' 00:30:36.323 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:36.323 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:36.323 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:36.323 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:36.323 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:37.268 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:37.268 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:30:37.268 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:37.268 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:37.268 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:37.268 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:37.268 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:30:37.268 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 209526 00:30:37.268 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 209526 ']' 00:30:37.268 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:37.268 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:37.268 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:37.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
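The nvmfappstart step above relaunches nvmf_tgt with --wait-for-rpc, so the app parks before subsystem initialization until an RPC releases it, while waitforlisten blocks on the UNIX-domain socket. A minimal sketch of that handshake, assuming the binary and socket paths shown in the log (the CI run additionally wraps the target in ip netns exec cvl_0_0_ns_spdk); the rpc_get_methods poll and the framework_start_init release are standard SPDK RPCs used here as an assumption about how the wait is implemented, not a transcript of the script:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!
    # Poll the RPC socket until the app answers (rpc_get_methods is a cheap
    # query), then release it from --wait-for-rpc so subsystem init can run.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5
    done
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock framework_start_init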
00:30:37.268 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:37.268 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:37.268 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:37.268 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:30:37.268 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:30:37.268 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.268 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:37.530 null0 00:30:37.531 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.531 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:30:37.531 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Es7 00:30:37.531 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.531 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:37.531 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.531 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.wXB ]] 00:30:37.531 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wXB 00:30:37.531 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.531 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:37.531 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.531 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:30:37.531 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Mja 00:30:37.531 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.531 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:37.531 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.531 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.2FQ ]] 00:30:37.531 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.2FQ 00:30:37.531 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.531 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:37.531 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.531 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:30:37.531 22:29:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.LJP 00:30:37.531 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.531 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:37.531 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.531 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.bGO ]] 00:30:37.531 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.bGO 00:30:37.531 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.531 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:37.531 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.531 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:30:37.531 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.JZd 00:30:37.531 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.531 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:37.531 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.531 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:30:37.531 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:30:37.531 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:37.531 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:30:37.531 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:30:37.531 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:30:37.531 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:37.531 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:30:37.531 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.531 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:37.531 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.531 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:30:37.531 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
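The keyring and add_host sequence above is the whole target-side DH-HMAC-CHAP wiring: the DHHC-1 key files are loaded into the keyring, key3 is bound to the host NQN (unidirectionally here, since ckeys[3] is empty and no --dhchap-ctrlr-key is passed), and the host app then re-attaches with the same key. Condensed into a sketch that uses only RPCs, sockets, and file names the log itself exercises:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204
    # Target side: register the key file, then allow the host to authenticate with it.
    $rpc keyring_file_add_key key3 /tmp/spdk.key-sha512.JZd
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3
    # Host side (separate app on /var/tmp/host.sock): pin sha512/ffdhe8192, then attach.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key3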
00:30:37.531 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:30:38.472 nvme0n1 00:30:38.472 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:38.472 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:38.472 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:38.732 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:38.732 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:38.732 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.732 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:38.732 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.732 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:38.732 { 00:30:38.732 "cntlid": 1, 00:30:38.732 "qid": 0, 00:30:38.732 "state": "enabled", 00:30:38.732 "thread": "nvmf_tgt_poll_group_000", 00:30:38.732 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:30:38.732 "listen_address": { 00:30:38.732 "trtype": "TCP", 00:30:38.732 "adrfam": "IPv4", 00:30:38.732 "traddr": "10.0.0.2", 00:30:38.732 "trsvcid": "4420" 00:30:38.732 }, 00:30:38.732 "peer_address": { 00:30:38.732 "trtype": "TCP", 00:30:38.732 "adrfam": "IPv4", 00:30:38.732 "traddr": "10.0.0.1", 00:30:38.732 "trsvcid": "41342" 00:30:38.732 }, 00:30:38.732 "auth": { 00:30:38.732 "state": "completed", 00:30:38.732 "digest": "sha512", 00:30:38.732 "dhgroup": "ffdhe8192" 00:30:38.732 } 00:30:38.732 } 00:30:38.732 ]' 00:30:38.732 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:38.732 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:38.732 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:38.732 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:30:38.732 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:38.732 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:38.732 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:38.732 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:38.993 22:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZmRjZjBlNWY4Y2FjY2E1ODM3ZmZkZjYxNDBjZjJjMDg5MzE4MGViYTljYjYwOWQ1YmU4OGM4NGNiMTE3OWI5NASd1M0=: 00:30:38.993 22:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:03:ZmRjZjBlNWY4Y2FjY2E1ODM3ZmZkZjYxNDBjZjJjMDg5MzE4MGViYTljYjYwOWQ1YmU4OGM4NGNiMTE3OWI5NASd1M0=: 00:30:39.564 22:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:39.824 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:39.824 22:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:30:39.824 22:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.824 22:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:39.824 22:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.824 22:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:30:39.824 22:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.824 22:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:39.824 22:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.824 22:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:30:39.824 22:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:30:39.824 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:30:39.824 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:30:39.824 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:30:39.824 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:30:39.824 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:39.824 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:30:39.824 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:39.824 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:30:39.824 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:30:39.824 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:30:40.084 request: 00:30:40.084 { 00:30:40.084 "name": "nvme0", 00:30:40.084 "trtype": "tcp", 00:30:40.084 "traddr": "10.0.0.2", 00:30:40.084 "adrfam": "ipv4", 00:30:40.084 "trsvcid": "4420", 00:30:40.084 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:30:40.084 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:30:40.084 "prchk_reftag": false, 00:30:40.084 "prchk_guard": false, 00:30:40.084 "hdgst": false, 00:30:40.084 "ddgst": false, 00:30:40.084 "dhchap_key": "key3", 00:30:40.084 "allow_unrecognized_csi": false, 00:30:40.084 "method": "bdev_nvme_attach_controller", 00:30:40.084 "req_id": 1 00:30:40.084 } 00:30:40.084 Got JSON-RPC error response 00:30:40.084 response: 00:30:40.084 { 00:30:40.084 "code": -5, 00:30:40.084 "message": "Input/output error" 00:30:40.084 } 00:30:40.084 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:30:40.084 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:40.084 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:40.084 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:40.084 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:30:40.084 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:30:40.084 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:30:40.084 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:30:40.344 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:30:40.344 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:30:40.344 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:30:40.344 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:30:40.344 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:40.344 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:30:40.344 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:40.344 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:30:40.344 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:30:40.344 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:30:40.344 request: 00:30:40.344 { 00:30:40.344 "name": "nvme0", 00:30:40.344 "trtype": "tcp", 00:30:40.344 "traddr": "10.0.0.2", 00:30:40.344 "adrfam": "ipv4", 00:30:40.344 "trsvcid": "4420", 00:30:40.344 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:30:40.344 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:30:40.344 "prchk_reftag": false, 00:30:40.344 "prchk_guard": false, 00:30:40.344 "hdgst": false, 00:30:40.344 "ddgst": false, 00:30:40.344 "dhchap_key": "key3", 00:30:40.344 "allow_unrecognized_csi": false, 00:30:40.344 "method": "bdev_nvme_attach_controller", 00:30:40.344 "req_id": 1 00:30:40.344 } 00:30:40.344 Got JSON-RPC error response 00:30:40.344 response: 00:30:40.344 { 00:30:40.344 "code": -5, 00:30:40.344 "message": "Input/output error" 00:30:40.344 } 00:30:40.344 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:30:40.344 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:40.344 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:40.344 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:40.344 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:30:40.344 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:30:40.344 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:30:40.344 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:40.344 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:40.344 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:40.605 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:30:40.605 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.605 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:40.605 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.605 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:30:40.605 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.605 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:40.605 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.605 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:30:40.605 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:30:40.605 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:30:40.605 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:30:40.605 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:40.605 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:30:40.605 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:40.605 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:30:40.605 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:30:40.605 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:30:40.866 request: 00:30:40.866 { 00:30:40.866 "name": "nvme0", 00:30:40.866 "trtype": "tcp", 00:30:40.866 "traddr": "10.0.0.2", 00:30:40.866 "adrfam": "ipv4", 00:30:40.866 "trsvcid": "4420", 00:30:40.866 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:30:40.866 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:30:40.866 "prchk_reftag": false, 00:30:40.866 "prchk_guard": false, 00:30:40.866 "hdgst": false, 00:30:40.866 "ddgst": false, 00:30:40.866 "dhchap_key": "key0", 00:30:40.866 "dhchap_ctrlr_key": "key1", 00:30:40.866 "allow_unrecognized_csi": false, 00:30:40.866 "method": "bdev_nvme_attach_controller", 00:30:40.866 "req_id": 1 00:30:40.866 } 00:30:40.866 Got JSON-RPC error response 00:30:40.866 response: 00:30:40.866 { 00:30:40.866 "code": -5, 00:30:40.866 "message": "Input/output error" 00:30:40.866 } 00:30:40.866 22:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:30:40.866 22:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:40.866 22:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:40.866 22:29:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:40.866 22:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:30:40.866 22:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:30:40.866 22:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:30:41.126 nvme0n1 00:30:41.126 22:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:30:41.126 22:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:30:41.126 22:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:41.386 22:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:41.386 22:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:41.386 22:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:41.646 22:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 00:30:41.646 22:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.646 22:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:41.646 22:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.646 22:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:30:41.646 22:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:30:41.646 22:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:30:42.585 nvme0n1 00:30:42.585 22:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:30:42.585 22:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:30:42.585 22:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:42.585 22:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:42.585 22:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key key3 00:30:42.585 22:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.585 22:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:42.585 22:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.585 22:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:30:42.585 22:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:30:42.585 22:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:42.845 22:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:42.845 22:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2E5N2Q0MjVmNDNjOGYxZGQ3YzQ5ZmZhYzIyZjk0MzU0MGViZTAyYzEzYzkxMTZhU0B5KA==: --dhchap-ctrl-secret DHHC-1:03:ZmRjZjBlNWY4Y2FjY2E1ODM3ZmZkZjYxNDBjZjJjMDg5MzE4MGViYTljYjYwOWQ1YmU4OGM4NGNiMTE3OWI5NASd1M0=: 00:30:42.845 22:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:02:Y2E5N2Q0MjVmNDNjOGYxZGQ3YzQ5ZmZhYzIyZjk0MzU0MGViZTAyYzEzYzkxMTZhU0B5KA==: --dhchap-ctrl-secret DHHC-1:03:ZmRjZjBlNWY4Y2FjY2E1ODM3ZmZkZjYxNDBjZjJjMDg5MzE4MGViYTljYjYwOWQ1YmU4OGM4NGNiMTE3OWI5NASd1M0=: 00:30:43.800 22:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:30:43.800 22:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:30:43.800 22:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:30:43.800 22:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:30:43.800 22:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:30:43.800 22:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:30:43.800 22:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:30:43.800 22:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:43.801 22:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:43.801 22:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:30:43.801 22:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:30:43.801 22:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:30:43.801 22:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:30:43.801 22:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:43.801 22:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:30:43.801 22:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:43.801 22:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:30:43.801 22:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:30:43.801 22:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:30:44.372 request: 00:30:44.372 { 00:30:44.372 "name": "nvme0", 00:30:44.372 "trtype": "tcp", 00:30:44.372 "traddr": "10.0.0.2", 00:30:44.372 "adrfam": "ipv4", 00:30:44.372 "trsvcid": "4420", 00:30:44.372 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:30:44.372 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:30:44.372 "prchk_reftag": false, 00:30:44.372 "prchk_guard": false, 00:30:44.372 "hdgst": false, 00:30:44.372 "ddgst": false, 00:30:44.372 "dhchap_key": "key1", 00:30:44.372 "allow_unrecognized_csi": false, 00:30:44.372 "method": "bdev_nvme_attach_controller", 00:30:44.372 "req_id": 1 00:30:44.372 } 00:30:44.372 Got JSON-RPC error response 00:30:44.372 response: 00:30:44.372 { 00:30:44.372 "code": -5, 00:30:44.372 "message": "Input/output error" 00:30:44.372 } 00:30:44.372 22:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:30:44.372 22:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:44.372 22:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:44.372 22:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:44.372 22:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:30:44.372 22:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:30:44.372 22:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:30:44.944 nvme0n1 00:30:45.204 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:30:45.204 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:30:45.204 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:45.204 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:45.204 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:45.204 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:45.466 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:30:45.466 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.466 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:45.466 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.466 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:30:45.466 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:30:45.466 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:30:45.728 nvme0n1 00:30:45.728 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:30:45.728 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:30:45.728 22:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:45.990 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:45.990 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:45.990 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:45.990 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key key3 00:30:45.990 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.990 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:45.990 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.990 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MTlmY2Y1MzA1OTQzNDU3YTRjMWE0MzVlNTU1ZGE2YjhHWWQq: '' 2s 00:30:45.990 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:30:45.990 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:30:45.990 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MTlmY2Y1MzA1OTQzNDU3YTRjMWE0MzVlNTU1ZGE2YjhHWWQq: 00:30:45.990 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:30:45.990 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:30:45.990 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:30:45.990 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MTlmY2Y1MzA1OTQzNDU3YTRjMWE0MzVlNTU1ZGE2YjhHWWQq: ]] 00:30:45.990 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MTlmY2Y1MzA1OTQzNDU3YTRjMWE0MzVlNTU1ZGE2YjhHWWQq: 00:30:45.990 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:30:45.990 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:30:45.990 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:30:48.548 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:30:48.548 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:30:48.548 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:30:48.548 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:30:48.548 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:30:48.548 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:30:48.548 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:30:48.548 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key key2 00:30:48.548 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.548 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:48.548 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.548 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:Y2E5N2Q0MjVmNDNjOGYxZGQ3YzQ5ZmZhYzIyZjk0MzU0MGViZTAyYzEzYzkxMTZhU0B5KA==: 2s 00:30:48.548 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:30:48.548 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:30:48.548 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:30:48.548 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:Y2E5N2Q0MjVmNDNjOGYxZGQ3YzQ5ZmZhYzIyZjk0MzU0MGViZTAyYzEzYzkxMTZhU0B5KA==: 00:30:48.548 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:30:48.548 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:30:48.548 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:30:48.548 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:Y2E5N2Q0MjVmNDNjOGYxZGQ3YzQ5ZmZhYzIyZjk0MzU0MGViZTAyYzEzYzkxMTZhU0B5KA==: ]] 00:30:48.548 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:Y2E5N2Q0MjVmNDNjOGYxZGQ3YzQ5ZmZhYzIyZjk0MzU0MGViZTAyYzEzYzkxMTZhU0B5KA==: 00:30:48.548 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:30:48.548 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:30:50.466 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:30:50.466 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:30:50.466 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:30:50.466 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:30:50.466 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:30:50.466 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:30:50.466 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:30:50.466 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:50.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:50.466 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key key1 00:30:50.466 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.466 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:50.466 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.466 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:30:50.466 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:30:50.466 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:30:51.039 nvme0n1 00:30:51.039 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key key3 00:30:51.039 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.039 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:51.039 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.039 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:30:51.039 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:30:51.610 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:30:51.610 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:30:51.610 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:51.871 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:51.871 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:30:51.871 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.871 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:51.871 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.871 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:30:51.871 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:30:51.871 22:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:30:51.871 22:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:30:51.871 22:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:30:52.132 22:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:52.132 22:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key key3 00:30:52.132 22:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.132 22:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:52.132 22:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.132 22:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:30:52.132 22:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:30:52.132 22:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:30:52.132 22:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:30:52.132 22:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:52.132 22:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:30:52.132 22:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:52.132 22:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:30:52.132 22:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:30:52.704 request: 00:30:52.704 { 00:30:52.704 "name": "nvme0", 00:30:52.704 "dhchap_key": "key1", 00:30:52.704 "dhchap_ctrlr_key": "key3", 00:30:52.704 "method": "bdev_nvme_set_keys", 00:30:52.704 "req_id": 1 00:30:52.704 } 00:30:52.704 Got JSON-RPC error response 00:30:52.704 response: 00:30:52.704 { 00:30:52.704 "code": -13, 00:30:52.704 "message": "Permission denied" 00:30:52.704 } 00:30:52.704 22:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:30:52.704 22:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:52.704 22:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:52.704 22:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:52.704 22:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:30:52.704 22:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:52.704 22:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:30:52.704 22:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:30:52.704 22:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:30:54.088 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:30:54.088 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:30:54.088 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:54.088 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:30:54.088 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key key1 00:30:54.088 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.088 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:54.088 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.088 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:30:54.088 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:30:54.088 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:30:55.029 nvme0n1 00:30:55.029 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key key3 00:30:55.029 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.029 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:55.029 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.029 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:30:55.029 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:30:55.029 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:30:55.029 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
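For reference, the re-key flow being exercised here reduces to two RPCs. A minimal sketch, assuming the same NQNs, RPC sockets, and keyring entries key0..key3 used throughout this run:

# 1) Rotate the key pair the target will accept for this host NQN.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 \
    --dhchap-key key2 --dhchap-ctrlr-key key3
# 2) Re-authenticate the live host controller with the matching pair.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key key3
# A pair the target no longer allows fails authentication and the RPC
# returns JSON-RPC error -13 (Permission denied), which is what the NOT
# wrappers in this trace assert; the host then drops the controller, so
# bdev_nvme_get_controllers drains to length 0 within the one-second
# sleeps polled here.
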
00:30:55.029 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:55.029 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:30:55.029 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:55.029 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:30:55.029 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:30:55.289 request: 00:30:55.289 { 00:30:55.289 "name": "nvme0", 00:30:55.289 "dhchap_key": "key2", 00:30:55.289 "dhchap_ctrlr_key": "key0", 00:30:55.289 "method": "bdev_nvme_set_keys", 00:30:55.289 "req_id": 1 00:30:55.289 } 00:30:55.289 Got JSON-RPC error response 00:30:55.289 response: 00:30:55.289 { 00:30:55.289 "code": -13, 00:30:55.289 "message": "Permission denied" 00:30:55.289 } 00:30:55.289 22:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:30:55.289 22:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:55.289 22:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:55.289 22:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:55.289 22:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:30:55.289 22:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:30:55.289 22:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:55.549 22:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:30:55.549 22:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:30:56.488 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:30:56.489 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:30:56.489 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:56.749 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:30:56.749 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:30:56.749 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:30:56.749 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 181802 00:30:56.749 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 181802 ']' 00:30:56.749 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 181802 00:30:56.749 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:30:56.749 22:29:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:56.749 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 181802 00:30:56.749 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:56.749 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:56.749 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 181802' 00:30:56.749 killing process with pid 181802 00:30:56.749 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 181802 00:30:56.749 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 181802 00:30:57.009 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:30:57.009 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:57.009 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:30:57.009 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:57.009 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:30:57.009 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:57.009 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:57.009 rmmod nvme_tcp 00:30:57.009 rmmod nvme_fabrics 00:30:57.009 rmmod nvme_keyring 00:30:57.009 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:57.009 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:30:57.009 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:30:57.009 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@515 -- # '[' -n 209526 ']' 00:30:57.009 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # killprocess 209526 00:30:57.009 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 209526 ']' 00:30:57.009 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 209526 00:30:57.009 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:30:57.009 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:57.009 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 209526 00:30:57.269 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:57.269 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:57.269 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 209526' 00:30:57.269 killing process with pid 209526 00:30:57.269 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 209526 00:30:57.269 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@974 -- # wait 209526 00:30:57.269 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:57.270 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:57.270 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:57.270 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:30:57.270 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-save 00:30:57.270 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:57.270 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-restore 00:30:57.270 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:57.270 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:57.270 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:57.270 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:57.270 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:59.823 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:59.823 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Es7 /tmp/spdk.key-sha256.Mja /tmp/spdk.key-sha384.LJP /tmp/spdk.key-sha512.JZd /tmp/spdk.key-sha512.wXB /tmp/spdk.key-sha384.2FQ /tmp/spdk.key-sha256.bGO '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:30:59.823 00:30:59.823 real 2m49.022s 00:30:59.823 user 6m15.928s 00:30:59.823 sys 0m24.460s 00:30:59.823 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:59.823 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:59.823 ************************************ 00:30:59.823 END TEST nvmf_auth_target 00:30:59.823 ************************************ 00:30:59.823 22:29:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:30:59.823 22:29:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:30:59.823 22:29:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:59.823 22:29:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:59.823 22:29:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:59.823 ************************************ 00:30:59.823 START TEST nvmf_bdevio_no_huge 00:30:59.823 ************************************ 00:30:59.823 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:30:59.823 * Looking for test storage... 
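The teardown just completed reduces to the following sequence, sketched from the trace rather than from the harness's literal helper bodies (PIDs, interface, and key-file names are the ones from this run; the iptables pipe order is inferred):

# Stop the host daemon and the nvmf target started for the auth tests.
kill 181802 && wait 181802                        # host side (reactor_1)
kill 209526 && wait 209526                        # target side (reactor_0)
# Unload the kernel initiator stack; the rmmod lines for nvme_tcp,
# nvme_fabrics, and nvme_keyring above correspond to this step.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
# Strip the test's SPDK_NVMF-tagged firewall rules and flush the
# initiator-side address; network-namespace removal itself goes through
# _remove_spdk_ns, whose body is not traced here.
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip -4 addr flush cvl_0_1
# Remove the generated DH-HMAC-CHAP secrets (the /tmp/spdk.key-* files
# listed in the rm -f line above).
rm -f /tmp/spdk.key-*
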
00:30:59.823 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:59.823 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:59.823 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lcov --version 00:30:59.823 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:59.823 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:59.823 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:59.823 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:59.823 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:59.823 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:30:59.823 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:30:59.823 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:30:59.823 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:30:59.823 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:30:59.823 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:30:59.823 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:30:59.823 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:59.823 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:30:59.823 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:30:59.823 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:59.823 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:59.823 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:30:59.823 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:30:59.823 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:59.823 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:30:59.823 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:30:59.823 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:30:59.823 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:30:59.823 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:59.823 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:30:59.823 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:30:59.823 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:59.823 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:59.823 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:30:59.823 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:59.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.824 --rc genhtml_branch_coverage=1 00:30:59.824 --rc genhtml_function_coverage=1 00:30:59.824 --rc genhtml_legend=1 00:30:59.824 --rc geninfo_all_blocks=1 00:30:59.824 --rc geninfo_unexecuted_blocks=1 00:30:59.824 00:30:59.824 ' 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:59.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.824 --rc genhtml_branch_coverage=1 00:30:59.824 --rc genhtml_function_coverage=1 00:30:59.824 --rc genhtml_legend=1 00:30:59.824 --rc geninfo_all_blocks=1 00:30:59.824 --rc geninfo_unexecuted_blocks=1 00:30:59.824 00:30:59.824 ' 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:59.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.824 --rc genhtml_branch_coverage=1 00:30:59.824 --rc genhtml_function_coverage=1 00:30:59.824 --rc genhtml_legend=1 00:30:59.824 --rc geninfo_all_blocks=1 00:30:59.824 --rc geninfo_unexecuted_blocks=1 00:30:59.824 00:30:59.824 ' 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:59.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.824 --rc genhtml_branch_coverage=1 00:30:59.824 --rc genhtml_function_coverage=1 00:30:59.824 --rc genhtml_legend=1 00:30:59.824 --rc geninfo_all_blocks=1 00:30:59.824 --rc geninfo_unexecuted_blocks=1 00:30:59.824 00:30:59.824 ' 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:30:59.824 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:59.824 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:59.825 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:59.825 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:59.825 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:59.825 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:59.825 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:59.825 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:30:59.825 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:31:08.047 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:08.047 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:31:08.047 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:08.047 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:08.047 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:08.047 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:31:08.048 
22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:08.048 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:08.048 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:08.048 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:08.048 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # is_hw=yes 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:08.048 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:08.049 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:08.049 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:08.049 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:08.049 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:08.049 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:08.049 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:08.049 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:08.049 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:08.049 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:08.049 22:30:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:08.049 22:30:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:08.049 22:30:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:08.049 22:30:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:08.049 22:30:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:08.049 22:30:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:08.049 22:30:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:08.049 22:30:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:08.049 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:08.049 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.472 ms 00:31:08.049 00:31:08.049 --- 10.0.0.2 ping statistics --- 00:31:08.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:08.049 rtt min/avg/max/mdev = 0.472/0.472/0.472/0.000 ms 00:31:08.049 22:30:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:08.049 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:08.049 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:31:08.049 00:31:08.049 --- 10.0.0.1 ping statistics --- 00:31:08.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:08.049 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:31:08.049 22:30:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:08.049 22:30:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # return 0 00:31:08.049 22:30:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:08.049 22:30:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:08.049 22:30:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:08.049 22:30:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:08.049 22:30:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:08.049 22:30:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:08.049 22:30:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:08.049 22:30:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:31:08.049 22:30:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:08.049 22:30:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:08.049 22:30:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:31:08.049 22:30:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # nvmfpid=217962 00:31:08.049 22:30:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # waitforlisten 217962 00:31:08.049 22:30:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:31:08.049 22:30:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 217962 ']' 00:31:08.049 22:30:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:08.049 22:30:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:31:08.049 22:30:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:08.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:08.049 22:30:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:08.049 22:30:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:31:08.049 [2024-10-01 22:30:02.389787] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:31:08.049 [2024-10-01 22:30:02.389861] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:31:08.049 [2024-10-01 22:30:02.485510] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:08.049 [2024-10-01 22:30:02.593458] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:08.049 [2024-10-01 22:30:02.593509] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:08.049 [2024-10-01 22:30:02.593518] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:08.049 [2024-10-01 22:30:02.593525] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:08.049 [2024-10-01 22:30:02.593532] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
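The commands traced above (nvmf/common.sh@250-287 and the nvmfappstart at @506-508) isolate one port of the E810 NIC in its own network namespace and then start nvmf_tgt inside it without hugepages, which is the point of this test. A minimal standalone sketch of that sequence, assuming the cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing shown in this run; the binary path is shortened from the workspace-absolute one in the trace:

  # target-side namespace; move one NIC port into it
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # the initiator keeps cvl_0_1 in the default namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

  # bring everything up and open the NVMe/TCP port toward the initiator
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # verify both directions (the ping output interleaved above), then start
  # the target with 1024 MiB of plain, non-huge memory, flags as traced
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78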
00:31:08.049 [2024-10-01 22:30:02.593708] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:31:08.049 [2024-10-01 22:30:02.593905] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:31:08.049 [2024-10-01 22:30:02.594062] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:31:08.049 [2024-10-01 22:30:02.594062] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:31:08.049 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:08.049 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:31:08.049 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:08.049 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:08.049 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:31:08.049 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:08.049 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:08.049 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.049 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:31:08.049 [2024-10-01 22:30:03.272544] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:08.049 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.049 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:08.050 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.050 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:31:08.050 Malloc0 00:31:08.050 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.050 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:08.050 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.050 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:31:08.311 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.311 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:08.311 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.311 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:31:08.311 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.311 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:31:08.311 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.311 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:31:08.311 [2024-10-01 22:30:03.326545] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:08.311 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.311 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:31:08.311 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:31:08.311 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # config=() 00:31:08.311 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # local subsystem config 00:31:08.311 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:08.311 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:08.311 { 00:31:08.311 "params": { 00:31:08.311 "name": "Nvme$subsystem", 00:31:08.311 "trtype": "$TEST_TRANSPORT", 00:31:08.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:08.311 "adrfam": "ipv4", 00:31:08.311 "trsvcid": "$NVMF_PORT", 00:31:08.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:08.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:08.311 "hdgst": ${hdgst:-false}, 00:31:08.311 "ddgst": ${ddgst:-false} 00:31:08.311 }, 00:31:08.311 "method": "bdev_nvme_attach_controller" 00:31:08.311 } 00:31:08.311 EOF 00:31:08.311 )") 00:31:08.311 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # cat 00:31:08.311 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # jq . 00:31:08.311 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@583 -- # IFS=, 00:31:08.311 22:30:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:31:08.311 "params": { 00:31:08.311 "name": "Nvme1", 00:31:08.311 "trtype": "tcp", 00:31:08.311 "traddr": "10.0.0.2", 00:31:08.311 "adrfam": "ipv4", 00:31:08.311 "trsvcid": "4420", 00:31:08.311 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:08.311 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:08.311 "hdgst": false, 00:31:08.311 "ddgst": false 00:31:08.311 }, 00:31:08.311 "method": "bdev_nvme_attach_controller" 00:31:08.311 }' 00:31:08.311 [2024-10-01 22:30:03.393132] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
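With the target listening, bdevio.sh provisions it over the RPC socket and then runs the bdevio suite against the exported namespace. A condensed sketch of the same calls, assuming the default /var/tmp/spdk.sock RPC socket, with paths shortened from the workspace-absolute ones in the trace; gen_nvmf_target_json is the nvmf/common.sh helper whose heredoc output is printed above:

  RPC="scripts/rpc.py"

  # TCP transport; the -o and -u 8192 flags are copied verbatim from the trace
  $RPC nvmf_create_transport -t tcp -o -u 8192

  # 64 MiB ramdisk with 512-byte blocks to export
  $RPC bdev_malloc_create 64 512 -b Malloc0

  # subsystem + namespace + listener on the target-side address
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # the /dev/fd/62 in the trace is bash process substitution: bdevio reads the
  # bdev_nvme_attach_controller JSON printed above and connects as the initiator
  ./test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json) --no-huge -s 1024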
00:31:08.311 [2024-10-01 22:30:03.393206] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid218154 ] 00:31:08.311 [2024-10-01 22:30:03.466647] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:08.572 [2024-10-01 22:30:03.564396] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:31:08.572 [2024-10-01 22:30:03.564513] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:31:08.572 [2024-10-01 22:30:03.564516] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:08.834 I/O targets: 00:31:08.834 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:31:08.834 00:31:08.834 00:31:08.834 CUnit - A unit testing framework for C - Version 2.1-3 00:31:08.834 http://cunit.sourceforge.net/ 00:31:08.834 00:31:08.834 00:31:08.834 Suite: bdevio tests on: Nvme1n1 00:31:08.834 Test: blockdev write read block ...passed 00:31:08.834 Test: blockdev write zeroes read block ...passed 00:31:08.834 Test: blockdev write zeroes read no split ...passed 00:31:08.834 Test: blockdev write zeroes read split ...passed 00:31:08.834 Test: blockdev write zeroes read split partial ...passed 00:31:08.834 Test: blockdev reset ...[2024-10-01 22:30:04.005805] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.834 [2024-10-01 22:30:04.005877] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2549a50 (9): Bad file descriptor 00:31:08.834 [2024-10-01 22:30:04.025728] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:31:08.834 passed 00:31:08.834 Test: blockdev write read 8 blocks ...passed 00:31:08.834 Test: blockdev write read size > 128k ...passed 00:31:08.834 Test: blockdev write read invalid size ...passed 00:31:09.095 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:09.095 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:09.095 Test: blockdev write read max offset ...passed 00:31:09.095 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:09.095 Test: blockdev writev readv 8 blocks ...passed 00:31:09.095 Test: blockdev writev readv 30 x 1block ...passed 00:31:09.095 Test: blockdev writev readv block ...passed 00:31:09.095 Test: blockdev writev readv size > 128k ...passed 00:31:09.095 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:09.095 Test: blockdev comparev and writev ...[2024-10-01 22:30:04.288265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:09.095 [2024-10-01 22:30:04.288294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:09.095 [2024-10-01 22:30:04.288305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:09.095 [2024-10-01 22:30:04.288311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:09.095 [2024-10-01 22:30:04.288686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:09.095 [2024-10-01 22:30:04.288695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:09.095 [2024-10-01 22:30:04.288705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:09.095 [2024-10-01 22:30:04.288711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:09.095 [2024-10-01 22:30:04.289076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:09.095 [2024-10-01 22:30:04.289085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:09.095 [2024-10-01 22:30:04.289094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:09.095 [2024-10-01 22:30:04.289100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:09.095 [2024-10-01 22:30:04.289467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:09.095 [2024-10-01 22:30:04.289476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:09.095 [2024-10-01 22:30:04.289486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:09.095 [2024-10-01 22:30:04.289491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:09.095 passed 00:31:09.356 Test: blockdev nvme passthru rw ...passed 00:31:09.356 Test: blockdev nvme passthru vendor specific ...[2024-10-01 22:30:04.374074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:09.356 [2024-10-01 22:30:04.374087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:09.356 [2024-10-01 22:30:04.374321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:09.356 [2024-10-01 22:30:04.374330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:09.356 [2024-10-01 22:30:04.374544] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:09.356 [2024-10-01 22:30:04.374554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:09.356 [2024-10-01 22:30:04.374746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:09.356 [2024-10-01 22:30:04.374756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:09.356 passed 00:31:09.356 Test: blockdev nvme admin passthru ...passed 00:31:09.356 Test: blockdev copy ...passed 00:31:09.356 00:31:09.356 Run Summary: Type Total Ran Passed Failed Inactive 00:31:09.356 suites 1 1 n/a 0 0 00:31:09.356 tests 23 23 23 0 0 00:31:09.356 asserts 152 152 152 0 n/a 00:31:09.356 00:31:09.356 Elapsed time = 1.196 seconds 00:31:09.927 22:30:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:09.927 22:30:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.927 22:30:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:31:09.927 22:30:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.927 22:30:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:31:09.927 22:30:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:31:09.927 22:30:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:09.927 22:30:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:31:09.927 22:30:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:09.927 22:30:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:31:09.927 22:30:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:09.927 22:30:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:09.927 rmmod nvme_tcp 00:31:09.927 rmmod nvme_fabrics 00:31:09.927 rmmod nvme_keyring 00:31:09.927 22:30:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:09.927 22:30:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:31:09.927 22:30:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:31:09.927 22:30:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@515 -- # '[' -n 217962 ']' 00:31:09.927 22:30:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # killprocess 217962 00:31:09.927 22:30:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 217962 ']' 00:31:09.927 22:30:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 217962 00:31:09.927 22:30:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:31:09.927 22:30:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:09.927 22:30:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 217962 00:31:09.927 22:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:31:09.927 22:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:31:09.927 22:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 217962' 00:31:09.927 killing process with pid 217962 00:31:09.927 22:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 217962 00:31:09.927 22:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 217962 00:31:10.500 22:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:10.500 22:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:10.500 22:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:10.500 22:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:31:10.500 22:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-restore 00:31:10.500 22:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-save 00:31:10.500 22:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:10.500 22:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:10.501 22:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:10.501 22:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:10.501 22:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:10.501 22:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:12.416 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:12.416 00:31:12.416 real 0m12.911s 00:31:12.416 user 0m15.688s 00:31:12.416 sys 0m7.196s 00:31:12.416 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:12.416 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:31:12.416 ************************************ 00:31:12.416 END TEST nvmf_bdevio_no_huge 00:31:12.416 ************************************ 00:31:12.416 22:30:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:31:12.416 22:30:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:12.416 22:30:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:12.416 22:30:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:31:12.416 ************************************ 00:31:12.416 START TEST nvmf_tls 00:31:12.416 ************************************ 00:31:12.416 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:31:12.679 * Looking for test storage... 00:31:12.679 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:12.679 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:12.679 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lcov --version 00:31:12.679 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:12.679 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:12.679 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:12.679 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:12.679 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:12.679 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:31:12.679 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:31:12.679 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:31:12.679 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:31:12.679 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:31:12.679 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:31:12.679 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:31:12.679 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:12.679 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:31:12.679 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:31:12.679 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:12.679 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:12.679 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:31:12.679 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:31:12.679 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:12.679 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:31:12.679 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:31:12.679 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:31:12.679 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:31:12.679 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:12.679 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:31:12.679 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:31:12.679 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:12.679 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:12.679 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:31:12.679 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:12.679 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:12.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:12.679 --rc genhtml_branch_coverage=1 00:31:12.679 --rc genhtml_function_coverage=1 00:31:12.679 --rc genhtml_legend=1 00:31:12.679 --rc geninfo_all_blocks=1 00:31:12.679 --rc geninfo_unexecuted_blocks=1 00:31:12.679 00:31:12.679 ' 00:31:12.679 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:12.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:12.679 --rc genhtml_branch_coverage=1 00:31:12.679 --rc genhtml_function_coverage=1 00:31:12.679 --rc genhtml_legend=1 00:31:12.679 --rc geninfo_all_blocks=1 00:31:12.679 --rc geninfo_unexecuted_blocks=1 00:31:12.679 00:31:12.679 ' 00:31:12.679 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:12.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:12.679 --rc genhtml_branch_coverage=1 00:31:12.679 --rc genhtml_function_coverage=1 00:31:12.680 --rc genhtml_legend=1 00:31:12.680 --rc geninfo_all_blocks=1 00:31:12.680 --rc geninfo_unexecuted_blocks=1 00:31:12.680 00:31:12.680 ' 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:12.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:12.680 --rc genhtml_branch_coverage=1 00:31:12.680 --rc genhtml_function_coverage=1 00:31:12.680 --rc genhtml_legend=1 00:31:12.680 --rc geninfo_all_blocks=1 00:31:12.680 --rc geninfo_unexecuted_blocks=1 00:31:12.680 00:31:12.680 ' 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
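The scripts/common.sh trace above is a coverage-flag gate: "lt 1.15 2" splits the installed lcov version on dots and compares it field by field against 2 to decide which --rc flag spelling to export. A simplified sketch of that comparison, assuming the field-wise semantics visible in the cmp_versions trace, not the literal common.sh implementation:

  # field-wise dotted-version compare: succeeds when $1 < $2
  lt() {
      local IFS=.
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          ((${a[i]:-0} < ${b[i]:-0})) && return 0
          ((${a[i]:-0} > ${b[i]:-0})) && return 1
      done
      return 1 # equal versions are not "less than"
  }

  lt 1.15 2 && echo "lcov < 2: use the legacy --rc lcov_* flag spelling"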
00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:12.680 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:31:12.680 22:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:20.822 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:20.822 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:31:20.822 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:20.822 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:20.822 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:20.822 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:20.822 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:20.822 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:31:20.822 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:20.822 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:31:20.822 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:31:20.822 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:31:20.822 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:31:20.822 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:31:20.822 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:31:20.822 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:20.822 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:20.822 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:20.822 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:20.822 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:20.822 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:31:20.822 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:20.822 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:20.822 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:20.822 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:20.822 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:20.822 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:20.822 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:20.822 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:20.822 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:20.822 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:20.822 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:20.822 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:20.822 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:20.822 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:20.822 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:20.823 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:20.823 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:20.823 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # is_hw=yes 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:20.823 22:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:20.823 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:20.823 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:20.823 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:20.823 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:20.823 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:20.823 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:20.823 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:20.823 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:20.823 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:20.823 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:31:20.823 00:31:20.823 --- 10.0.0.2 ping statistics --- 00:31:20.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:20.823 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:31:20.823 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:20.823 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:20.823 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:31:20.823 00:31:20.823 --- 10.0.0.1 ping statistics --- 00:31:20.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:20.823 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:31:20.823 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:20.823 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # return 0 00:31:20.824 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:20.824 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:20.824 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:20.824 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:20.824 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:20.824 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:20.824 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:20.824 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:31:20.824 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:20.824 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:20.824 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:20.824 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=223288 00:31:20.824 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 223288 00:31:20.824 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:31:20.824 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 223288 ']' 00:31:20.824 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:20.824 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:20.824 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:20.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:20.824 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:20.824 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:20.824 [2024-10-01 22:30:15.378750] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:31:20.824 [2024-10-01 22:30:15.378820] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:20.824 [2024-10-01 22:30:15.468466] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:20.824 [2024-10-01 22:30:15.561707] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:20.824 [2024-10-01 22:30:15.561764] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:20.824 [2024-10-01 22:30:15.561773] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:20.824 [2024-10-01 22:30:15.561781] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:20.824 [2024-10-01 22:30:15.561787] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:20.824 [2024-10-01 22:30:15.561811] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:31:21.093 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:21.093 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:31:21.093 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:21.093 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:21.093 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:21.093 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:21.093 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:31:21.093 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:31:21.356 true 00:31:21.356 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:31:21.356 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:31:21.356 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:31:21.356 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:31:21.356 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:31:21.617 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:31:21.617 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:31:21.878 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:31:21.878 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:31:21.878 22:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:31:22.139 22:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:31:22.139 22:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:31:22.139 22:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:31:22.139 22:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:31:22.139 22:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:31:22.139 22:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:31:22.399 22:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:31:22.399 22:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:31:22.399 22:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:31:22.660 22:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:31:22.660 22:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:31:22.660 22:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:31:22.660 22:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:31:22.660 22:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:31:22.920 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:31:22.920 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:31:23.180 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:31:23.180 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:31:23.180 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:31:23.180 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:31:23.180 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:31:23.180 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:31:23.180 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:31:23.180 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:31:23.180 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:31:23.180 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:31:23.181 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:31:23.181 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:31:23.181 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@728 -- # local prefix key digest 00:31:23.181 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:31:23.181 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=ffeeddccbbaa99887766554433221100 00:31:23.181 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:31:23.181 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:31:23.181 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:31:23.181 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:31:23.181 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.0y3IhAMUvI 00:31:23.181 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:31:23.181 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.vUfcOTLrz6 00:31:23.181 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:31:23.181 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:31:23.181 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.0y3IhAMUvI 00:31:23.181 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.vUfcOTLrz6 00:31:23.181 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:31:23.441 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:31:23.701 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.0y3IhAMUvI 00:31:23.701 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.0y3IhAMUvI 00:31:23.701 22:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:31:23.967 [2024-10-01 22:30:19.024393] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:23.967 22:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:31:23.967 22:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:31:24.228 [2024-10-01 22:30:19.345160] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:24.228 [2024-10-01 22:30:19.345356] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:24.228 22:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:31:24.488 malloc0 00:31:24.488 22:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:31:24.488 22:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.0y3IhAMUvI 00:31:24.750 22:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:31:25.009 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.0y3IhAMUvI 00:31:35.005 Initializing NVMe Controllers 00:31:35.005 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:35.005 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:35.005 Initialization complete. Launching workers. 00:31:35.005 ======================================================== 00:31:35.005 Latency(us) 00:31:35.005 Device Information : IOPS MiB/s Average min max 00:31:35.005 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18452.76 72.08 3468.37 1137.41 4274.87 00:31:35.005 ======================================================== 00:31:35.005 Total : 18452.76 72.08 3468.37 1137.41 4274.87 00:31:35.005 00:31:35.005 22:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0y3IhAMUvI 00:31:35.005 22:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:31:35.005 22:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:31:35.005 22:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:31:35.005 22:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.0y3IhAMUvI 00:31:35.005 22:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:35.005 22:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=226033 00:31:35.005 22:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:35.005 22:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 226033 /var/tmp/bdevperf.sock 00:31:35.005 22:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:31:35.005 22:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 226033 ']' 00:31:35.005 22:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:35.005 22:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:35.005 22:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:31:35.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:35.005 22:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:35.005 22:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:35.005 [2024-10-01 22:30:30.168456] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:31:35.005 [2024-10-01 22:30:30.168514] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid226033 ] 00:31:35.005 [2024-10-01 22:30:30.220038] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:35.265 [2024-10-01 22:30:30.272664] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:31:35.838 22:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:35.838 22:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:31:35.838 22:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0y3IhAMUvI 00:31:36.097 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:31:36.097 [2024-10-01 22:30:31.271515] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:36.358 TLSTESTn1 00:31:36.358 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:31:36.358 Running I/O for 10 seconds... 
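Before the per-second samples below: each rpc.py invocation in this run (keyring_file_add_key, bdev_nvme_attach_controller, ...) reduces to a JSON-RPC 2.0 request over the UNIX socket given with -s/-r. A hand-rolled sketch, with parameters mirroring the request dumps captured later in this log (the naive read loop simply retries until the buffer parses as one complete JSON document):

    import json, socket

    def rpc(sock_path, method, params):
        req = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(sock_path)
            s.sendall(json.dumps(req).encode())
            buf = b""
            while True:
                buf += s.recv(4096)
                try:
                    return json.loads(buf)  # complete response received
                except ValueError:
                    continue                # partial read; keep going

    rpc("/var/tmp/bdevperf.sock", "keyring_file_add_key",
        {"name": "key0", "path": "/tmp/tmp.0y3IhAMUvI"})
    rpc("/var/tmp/bdevperf.sock", "bdev_nvme_attach_controller",
        {"name": "TLSTEST", "trtype": "tcp", "traddr": "10.0.0.2",
         "adrfam": "ipv4", "trsvcid": "4420",
         "subnqn": "nqn.2016-06.io.spdk:cnode1",
         "hostnqn": "nqn.2016-06.io.spdk:host1", "psk": "key0"})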
00:31:46.640 5861.00 IOPS, 22.89 MiB/s 6001.50 IOPS, 23.44 MiB/s 5771.67 IOPS, 22.55 MiB/s 5764.75 IOPS, 22.52 MiB/s 5662.00 IOPS, 22.12 MiB/s 5703.83 IOPS, 22.28 MiB/s 5581.00 IOPS, 21.80 MiB/s 5494.75 IOPS, 21.46 MiB/s 5560.11 IOPS, 21.72 MiB/s 5532.10 IOPS, 21.61 MiB/s 00:31:46.640 Latency(us) 00:31:46.640 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:46.640 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:31:46.640 Verification LBA range: start 0x0 length 0x2000 00:31:46.640 TLSTESTn1 : 10.02 5533.62 21.62 0.00 0.00 23099.92 5024.43 67283.63 00:31:46.640 =================================================================================================================== 00:31:46.640 Total : 5533.62 21.62 0.00 0.00 23099.92 5024.43 67283.63 00:31:46.640 { 00:31:46.640 "results": [ 00:31:46.640 { 00:31:46.640 "job": "TLSTESTn1", 00:31:46.640 "core_mask": "0x4", 00:31:46.640 "workload": "verify", 00:31:46.640 "status": "finished", 00:31:46.640 "verify_range": { 00:31:46.640 "start": 0, 00:31:46.640 "length": 8192 00:31:46.640 }, 00:31:46.640 "queue_depth": 128, 00:31:46.640 "io_size": 4096, 00:31:46.640 "runtime": 10.02039, 00:31:46.640 "iops": 5533.616955028697, 00:31:46.640 "mibps": 21.615691230580847, 00:31:46.640 "io_failed": 0, 00:31:46.640 "io_timeout": 0, 00:31:46.640 "avg_latency_us": 23099.91926947886, 00:31:46.640 "min_latency_us": 5024.426666666666, 00:31:46.640 "max_latency_us": 67283.62666666666 00:31:46.640 } 00:31:46.640 ], 00:31:46.640 "core_count": 1 00:31:46.640 } 00:31:46.640 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:46.640 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 226033 00:31:46.640 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 226033 ']' 00:31:46.640 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 226033 00:31:46.640 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:31:46.640 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:46.640 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 226033 00:31:46.640 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:31:46.640 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:31:46.640 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 226033' 00:31:46.640 killing process with pid 226033 00:31:46.640 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 226033 00:31:46.640 Received shutdown signal, test time was about 10.000000 seconds 00:31:46.640 00:31:46.640 Latency(us) 00:31:46.640 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:46.640 =================================================================================================================== 00:31:46.640 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:46.640 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 226033 00:31:46.640 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
/tmp/tmp.vUfcOTLrz6 00:31:46.640 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:31:46.640 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vUfcOTLrz6 00:31:46.640 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:31:46.640 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:46.640 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:31:46.640 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:46.640 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vUfcOTLrz6 00:31:46.640 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:31:46.640 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:31:46.640 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:31:46.640 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.vUfcOTLrz6 00:31:46.640 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:46.640 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=228371 00:31:46.640 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:46.640 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 228371 /var/tmp/bdevperf.sock 00:31:46.640 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:31:46.640 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 228371 ']' 00:31:46.640 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:46.640 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:46.640 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:46.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:46.640 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:46.640 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:46.640 [2024-10-01 22:30:41.815456] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:31:46.640 [2024-10-01 22:30:41.815513] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid228371 ] 00:31:46.640 [2024-10-01 22:30:41.866006] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:46.899 [2024-10-01 22:30:41.919004] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:31:46.899 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:46.899 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:31:46.899 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vUfcOTLrz6 00:31:47.158 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:31:47.158 [2024-10-01 22:30:42.386384] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:47.158 [2024-10-01 22:30:42.390703] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:47.158 [2024-10-01 22:30:42.391319] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1545e00 (107): Transport endpoint is not connected 00:31:47.158 [2024-10-01 22:30:42.392314] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1545e00 (9): Bad file descriptor 00:31:47.158 [2024-10-01 22:30:42.393315] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:47.158 [2024-10-01 22:30:42.393326] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:31:47.158 [2024-10-01 22:30:42.393331] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:31:47.158 [2024-10-01 22:30:42.393339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
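The failure above is the expected one for this NOT case: the initiator registered /tmp/tmp.vUfcOTLrz6 (key_2) while the target's host entry was added with the first key, so the TLS handshake is torn down and the attach surfaces as an I/O error — the JSON-RPC dump follows. A small sketch for unwrapping an interchange key back to its raw secret (assuming the base64-wrapped, CRC32-suffixed layout produced by format_interchange_psk earlier), handy for confirming the two temp files really carry different secrets:

    import base64, zlib

    def unwrap_interchange_psk(interchange):
        # layout assumed: "NVMeTLSkey-1:<hash>:<base64(secret + crc32_le)>:"
        prefix, hash_id, b64, _ = interchange.split(":")
        assert prefix == "NVMeTLSkey-1"
        blob = base64.b64decode(b64)
        secret, crc = blob[:-4], blob[-4:]
        assert zlib.crc32(secret).to_bytes(4, "little") == crc
        return secret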
00:31:47.158 request: 00:31:47.158 { 00:31:47.158 "name": "TLSTEST", 00:31:47.158 "trtype": "tcp", 00:31:47.158 "traddr": "10.0.0.2", 00:31:47.158 "adrfam": "ipv4", 00:31:47.158 "trsvcid": "4420", 00:31:47.158 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:47.158 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:47.158 "prchk_reftag": false, 00:31:47.158 "prchk_guard": false, 00:31:47.158 "hdgst": false, 00:31:47.158 "ddgst": false, 00:31:47.158 "psk": "key0", 00:31:47.158 "allow_unrecognized_csi": false, 00:31:47.158 "method": "bdev_nvme_attach_controller", 00:31:47.158 "req_id": 1 00:31:47.158 } 00:31:47.158 Got JSON-RPC error response 00:31:47.158 response: 00:31:47.158 { 00:31:47.158 "code": -5, 00:31:47.158 "message": "Input/output error" 00:31:47.158 } 00:31:47.158 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 228371 00:31:47.158 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 228371 ']' 00:31:47.158 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 228371 00:31:47.158 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:31:47.417 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:47.417 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 228371 00:31:47.417 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:31:47.417 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:31:47.417 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 228371' 00:31:47.417 killing process with pid 228371 00:31:47.417 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 228371 00:31:47.417 Received shutdown signal, test time was about 10.000000 seconds 00:31:47.417 00:31:47.417 Latency(us) 00:31:47.417 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:47.417 =================================================================================================================== 00:31:47.417 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:47.417 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 228371 00:31:47.417 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:31:47.417 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:31:47.417 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:47.417 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:47.417 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:47.417 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.0y3IhAMUvI 00:31:47.417 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:31:47.417 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.0y3IhAMUvI 00:31:47.417 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:31:47.417 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:47.417 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:31:47.417 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:47.417 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.0y3IhAMUvI 00:31:47.417 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:31:47.417 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:31:47.417 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:31:47.417 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.0y3IhAMUvI 00:31:47.417 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:47.417 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=228558 00:31:47.417 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:47.417 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 228558 /var/tmp/bdevperf.sock 00:31:47.417 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:31:47.417 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 228558 ']' 00:31:47.417 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:47.417 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:47.417 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:47.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:47.417 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:47.417 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:47.687 [2024-10-01 22:30:42.680246] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:31:47.687 [2024-10-01 22:30:42.680300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid228558 ] 00:31:47.687 [2024-10-01 22:30:42.731031] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:47.687 [2024-10-01 22:30:42.784194] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:31:48.257 22:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:48.257 22:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:31:48.257 22:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0y3IhAMUvI 00:31:48.516 22:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:31:48.516 [2024-10-01 22:30:43.762478] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:48.775 [2024-10-01 22:30:43.773359] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:31:48.775 [2024-10-01 22:30:43.773378] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:31:48.775 [2024-10-01 22:30:43.773399] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:48.776 [2024-10-01 22:30:43.773675] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2148e00 (107): Transport endpoint is not connected 00:31:48.776 [2024-10-01 22:30:43.774671] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2148e00 (9): Bad file descriptor 00:31:48.776 [2024-10-01 22:30:43.775672] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:48.776 [2024-10-01 22:30:43.775683] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:31:48.776 [2024-10-01 22:30:43.775688] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:31:48.776 [2024-10-01 22:30:43.775696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
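Note the identity string in the tcp_sock_get_key error above: the target looks the PSK up by "NVMe0R01 <hostnqn> <subnqn>", and nothing was registered for host2, hence the handshake abort. A one-line sketch of how that lookup identity is assembled, assuming the trailing digits track the hash indicator (01 here):

    def psk_identity(hostnqn, subnqn, hash_id="01"):
        # mirrors the identity format in the error lines above
        return "NVMe0R{} {} {}".format(hash_id, hostnqn, subnqn)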
00:31:48.776 request: 00:31:48.776 { 00:31:48.776 "name": "TLSTEST", 00:31:48.776 "trtype": "tcp", 00:31:48.776 "traddr": "10.0.0.2", 00:31:48.776 "adrfam": "ipv4", 00:31:48.776 "trsvcid": "4420", 00:31:48.776 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:48.776 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:48.776 "prchk_reftag": false, 00:31:48.776 "prchk_guard": false, 00:31:48.776 "hdgst": false, 00:31:48.776 "ddgst": false, 00:31:48.776 "psk": "key0", 00:31:48.776 "allow_unrecognized_csi": false, 00:31:48.776 "method": "bdev_nvme_attach_controller", 00:31:48.776 "req_id": 1 00:31:48.776 } 00:31:48.776 Got JSON-RPC error response 00:31:48.776 response: 00:31:48.776 { 00:31:48.776 "code": -5, 00:31:48.776 "message": "Input/output error" 00:31:48.776 } 00:31:48.776 22:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 228558 00:31:48.776 22:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 228558 ']' 00:31:48.776 22:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 228558 00:31:48.776 22:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:31:48.776 22:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:48.776 22:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 228558 00:31:48.776 22:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:31:48.776 22:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:31:48.776 22:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 228558' 00:31:48.776 killing process with pid 228558 00:31:48.776 22:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 228558 00:31:48.776 Received shutdown signal, test time was about 10.000000 seconds 00:31:48.776 00:31:48.776 Latency(us) 00:31:48.776 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:48.776 =================================================================================================================== 00:31:48.776 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:48.776 22:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 228558 00:31:48.776 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:31:48.776 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:31:48.776 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:48.776 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:48.776 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:48.776 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.0y3IhAMUvI 00:31:48.776 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:31:48.776 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.0y3IhAMUvI 00:31:48.776 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:31:48.776 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:48.776 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:31:48.776 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:48.776 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.0y3IhAMUvI 00:31:48.776 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:31:48.776 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:31:48.776 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:31:48.776 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.0y3IhAMUvI 00:31:48.776 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:48.776 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=228748 00:31:48.776 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:48.776 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 228748 /var/tmp/bdevperf.sock 00:31:48.776 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:31:48.776 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 228748 ']' 00:31:48.776 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:48.776 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:48.776 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:48.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:48.776 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:48.776 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:49.035 [2024-10-01 22:30:44.075110] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:31:49.035 [2024-10-01 22:30:44.075166] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid228748 ] 00:31:49.035 [2024-10-01 22:30:44.126339] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:49.035 [2024-10-01 22:30:44.178886] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:31:49.975 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:49.975 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:31:49.975 22:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0y3IhAMUvI 00:31:49.975 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:31:49.975 [2024-10-01 22:30:45.207342] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:49.975 [2024-10-01 22:30:45.216876] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:31:49.975 [2024-10-01 22:30:45.216894] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:31:49.975 [2024-10-01 22:30:45.216914] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:49.975 [2024-10-01 22:30:45.217503] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbe00 (107): Transport endpoint is not connected 00:31:49.975 [2024-10-01 22:30:45.218499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbe00 (9): Bad file descriptor 00:31:49.975 [2024-10-01 22:30:45.219500] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:31:49.975 [2024-10-01 22:30:45.219508] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:31:49.975 [2024-10-01 22:30:45.219514] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:31:49.975 [2024-10-01 22:30:45.219521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:31:49.975 request: 00:31:49.975 { 00:31:49.975 "name": "TLSTEST", 00:31:49.975 "trtype": "tcp", 00:31:49.975 "traddr": "10.0.0.2", 00:31:49.975 "adrfam": "ipv4", 00:31:49.975 "trsvcid": "4420", 00:31:49.975 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:49.975 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:49.975 "prchk_reftag": false, 00:31:49.975 "prchk_guard": false, 00:31:49.975 "hdgst": false, 00:31:49.975 "ddgst": false, 00:31:49.975 "psk": "key0", 00:31:49.975 "allow_unrecognized_csi": false, 00:31:49.975 "method": "bdev_nvme_attach_controller", 00:31:49.975 "req_id": 1 00:31:49.975 } 00:31:49.975 Got JSON-RPC error response 00:31:49.975 response: 00:31:49.975 { 00:31:49.975 "code": -5, 00:31:49.975 "message": "Input/output error" 00:31:49.975 } 00:31:50.236 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 228748 00:31:50.236 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 228748 ']' 00:31:50.236 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 228748 00:31:50.236 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:31:50.236 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:50.236 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 228748 00:31:50.236 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:31:50.236 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:31:50.236 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 228748' 00:31:50.236 killing process with pid 228748 00:31:50.236 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 228748 00:31:50.236 Received shutdown signal, test time was about 10.000000 seconds 00:31:50.236 00:31:50.236 Latency(us) 00:31:50.236 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:50.236 =================================================================================================================== 00:31:50.236 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:50.236 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 228748 00:31:50.236 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:31:50.236 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:31:50.236 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:50.236 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:50.236 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:50.236 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:31:50.236 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:31:50.236 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:31:50.236 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local 
arg=run_bdevperf 00:31:50.236 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:50.236 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:31:50.236 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:50.236 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:31:50.236 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:31:50.236 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:31:50.236 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:31:50.236 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:31:50.236 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:50.236 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=229074 00:31:50.236 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:50.236 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 229074 /var/tmp/bdevperf.sock 00:31:50.236 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:31:50.236 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 229074 ']' 00:31:50.236 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:50.236 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:50.236 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:50.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:50.236 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:50.236 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:50.497 [2024-10-01 22:30:45.505420] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:31:50.497 [2024-10-01 22:30:45.505475] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid229074 ] 00:31:50.497 [2024-10-01 22:30:45.555920] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:50.497 [2024-10-01 22:30:45.608197] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:31:50.497 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:50.497 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:31:50.497 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:31:50.757 [2024-10-01 22:30:45.888568] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:31:50.757 [2024-10-01 22:30:45.888593] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:31:50.757 request: 00:31:50.757 { 00:31:50.757 "name": "key0", 00:31:50.757 "path": "", 00:31:50.757 "method": "keyring_file_add_key", 00:31:50.757 "req_id": 1 00:31:50.757 } 00:31:50.757 Got JSON-RPC error response 00:31:50.757 response: 00:31:50.757 { 00:31:50.757 "code": -1, 00:31:50.757 "message": "Operation not permitted" 00:31:50.757 } 00:31:50.757 22:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:31:51.018 [2024-10-01 22:30:46.041021] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:51.018 [2024-10-01 22:30:46.041044] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:31:51.018 request: 00:31:51.018 { 00:31:51.018 "name": "TLSTEST", 00:31:51.018 "trtype": "tcp", 00:31:51.018 "traddr": "10.0.0.2", 00:31:51.018 "adrfam": "ipv4", 00:31:51.018 "trsvcid": "4420", 00:31:51.018 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:51.018 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:51.018 "prchk_reftag": false, 00:31:51.018 "prchk_guard": false, 00:31:51.018 "hdgst": false, 00:31:51.018 "ddgst": false, 00:31:51.018 "psk": "key0", 00:31:51.018 "allow_unrecognized_csi": false, 00:31:51.018 "method": "bdev_nvme_attach_controller", 00:31:51.018 "req_id": 1 00:31:51.018 } 00:31:51.018 Got JSON-RPC error response 00:31:51.018 response: 00:31:51.018 { 00:31:51.018 "code": -126, 00:31:51.018 "message": "Required key not available" 00:31:51.018 } 00:31:51.018 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 229074 00:31:51.018 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 229074 ']' 00:31:51.018 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 229074 00:31:51.018 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:31:51.018 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:51.018 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 229074 
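Across these negative tests, the "code" fields in the JSON-RPC error responses are negated Linux errnos: the -5, -1 and -126 seen above decode to EIO, EPERM and ENOKEY, which is exactly where the "Input/output error", "Operation not permitted" and "Required key not available" messages come from. A quick check (on Linux):

    import errno, os

    for code in (-5, -1, -126):
        err = -code
        print(code, errno.errorcode[err], os.strerror(err))
    # -5   EIO    Input/output error
    # -1   EPERM  Operation not permitted
    # -126 ENOKEY Required key not available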
00:31:51.018 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:31:51.018 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:31:51.018 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 229074' 00:31:51.018 killing process with pid 229074 00:31:51.018 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 229074 00:31:51.018 Received shutdown signal, test time was about 10.000000 seconds 00:31:51.018 00:31:51.018 Latency(us) 00:31:51.018 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:51.018 =================================================================================================================== 00:31:51.018 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:51.018 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 229074 00:31:51.278 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:31:51.278 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:31:51.278 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:51.278 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:51.278 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:51.278 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 223288 00:31:51.278 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 223288 ']' 00:31:51.278 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 223288 00:31:51.279 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:31:51.279 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:51.279 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 223288 00:31:51.279 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:51.279 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:51.279 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 223288' 00:31:51.279 killing process with pid 223288 00:31:51.279 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 223288 00:31:51.279 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 223288 00:31:51.279 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:31:51.279 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:31:51.279 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:31:51.279 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:31:51.279 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff0011223344556677 
00:31:51.279 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=2 00:31:51.279 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:31:51.539 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:31:51.539 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:31:51.539 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.zLn9L4hQ3j 00:31:51.539 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:31:51.539 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.zLn9L4hQ3j 00:31:51.539 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:31:51.539 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:51.539 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:51.539 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:51.539 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=229421 00:31:51.539 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 229421 00:31:51.539 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:51.539 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 229421 ']' 00:31:51.539 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:51.539 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:51.539 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:51.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:51.539 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:51.539 22:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:51.539 [2024-10-01 22:30:46.622003] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:31:51.539 [2024-10-01 22:30:46.622058] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:51.539 [2024-10-01 22:30:46.705024] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:51.539 [2024-10-01 22:30:46.760219] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:51.539 [2024-10-01 22:30:46.760255] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
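The key_long value produced above is an NVMe TLS PSK in interchange format: the literal prefix NVMeTLSkey-1, a hash identifier (02 selects SHA-384 here, matching the digest=2 argument), and a base64 blob holding the configured PSK bytes with a 4-byte CRC32 appended. A minimal sketch of what the format_interchange_psk/format_key helpers compute, assuming the CRC32 is appended little-endian as the interchange format describes; note the test feeds the ASCII hex string in verbatim as the PSK bytes:

    key=00112233445566778899aabbccddeeff0011223344556677
    python3 - "$key" <<'EOF'
    import base64, struct, sys, zlib
    psk = sys.argv[1].encode()                # the ASCII string is used verbatim as PSK bytes
    crc = struct.pack('<I', zlib.crc32(psk))  # CRC32 of the PSK bytes, little-endian
    print('NVMeTLSkey-1:02:' + base64.b64encode(psk + crc).decode() + ':')
    EOF

Under those assumptions this reproduces the NVMeTLSkey-1:02:MDAx...wWXNJw==: string written to the /tmp/tmp.zLn9L4hQ3j key file above.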
00:31:51.539 [2024-10-01 22:30:46.760261] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:51.539 [2024-10-01 22:30:46.760266] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:51.539 [2024-10-01 22:30:46.760270] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:51.539 [2024-10-01 22:30:46.760285] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:31:52.480 22:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:52.480 22:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:31:52.480 22:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:52.480 22:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:52.480 22:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:52.480 22:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:52.480 22:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.zLn9L4hQ3j 00:31:52.480 22:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zLn9L4hQ3j 00:31:52.480 22:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:31:52.480 [2024-10-01 22:30:47.604163] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:52.480 22:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:31:52.741 22:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:31:52.741 [2024-10-01 22:30:47.940980] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:52.741 [2024-10-01 22:30:47.941177] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:52.741 22:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:31:53.002 malloc0 00:31:53.002 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:31:53.263 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zLn9L4hQ3j 00:31:53.263 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:31:53.524 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zLn9L4hQ3j 00:31:53.524 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:31:53.524 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:31:53.524 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:31:53.524 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.zLn9L4hQ3j 00:31:53.524 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:53.524 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=229786 00:31:53.524 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:53.524 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 229786 /var/tmp/bdevperf.sock 00:31:53.524 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:31:53.524 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 229786 ']' 00:31:53.524 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:53.524 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:53.524 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:53.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:53.524 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:53.524 22:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:53.524 [2024-10-01 22:30:48.659633] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
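run_bdevperf, traced above, drives the initiator side of the test: it launches bdevperf with -z (start suspended and wait for RPC configuration) on its own RPC socket, waits for that socket to answer, registers the PSK, attaches a TLS controller, and only then starts I/O via bdevperf.py. A condensed sketch of that flow, with rpc_get_methods polling standing in for the waitforlisten helper:

    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    # poll until the RPC socket answers
    until scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zLn9L4hQ3j
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests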
00:31:53.524 [2024-10-01 22:30:48.659686] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid229786 ] 00:31:53.524 [2024-10-01 22:30:48.710949] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:53.524 [2024-10-01 22:30:48.763789] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:31:54.464 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:54.464 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:31:54.464 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zLn9L4hQ3j 00:31:54.464 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:31:54.724 [2024-10-01 22:30:49.748387] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:54.724 TLSTESTn1 00:31:54.724 22:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:31:54.724 Running I/O for 10 seconds... 00:32:04.777 4777.00 IOPS, 18.66 MiB/s 4411.00 IOPS, 17.23 MiB/s 4958.67 IOPS, 19.37 MiB/s 4872.25 IOPS, 19.03 MiB/s 4813.40 IOPS, 18.80 MiB/s 4730.67 IOPS, 18.48 MiB/s 4799.14 IOPS, 18.75 MiB/s 4706.25 IOPS, 18.38 MiB/s 4551.22 IOPS, 17.78 MiB/s 4628.70 IOPS, 18.08 MiB/s 00:32:04.777 Latency(us) 00:32:04.777 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:04.777 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:32:04.777 Verification LBA range: start 0x0 length 0x2000 00:32:04.777 TLSTESTn1 : 10.01 4634.96 18.11 0.00 0.00 27581.91 4396.37 85196.80 00:32:04.777 =================================================================================================================== 00:32:04.777 Total : 4634.96 18.11 0.00 0.00 27581.91 4396.37 85196.80 00:32:04.777 { 00:32:04.777 "results": [ 00:32:04.777 { 00:32:04.777 "job": "TLSTESTn1", 00:32:04.777 "core_mask": "0x4", 00:32:04.777 "workload": "verify", 00:32:04.777 "status": "finished", 00:32:04.777 "verify_range": { 00:32:04.777 "start": 0, 00:32:04.777 "length": 8192 00:32:04.777 }, 00:32:04.777 "queue_depth": 128, 00:32:04.777 "io_size": 4096, 00:32:04.777 "runtime": 10.013678, 00:32:04.777 "iops": 4634.960301299882, 00:32:04.777 "mibps": 18.105313676952665, 00:32:04.777 "io_failed": 0, 00:32:04.777 "io_timeout": 0, 00:32:04.777 "avg_latency_us": 27581.906664655737, 00:32:04.777 "min_latency_us": 4396.373333333333, 00:32:04.777 "max_latency_us": 85196.8 00:32:04.777 } 00:32:04.777 ], 00:32:04.777 "core_count": 1 00:32:04.777 } 00:32:04.777 22:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:04.777 22:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 229786 00:32:04.777 22:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@950 -- # '[' -z 229786 ']' 00:32:04.777 22:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 229786 00:32:04.777 22:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:32:04.777 22:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:04.777 22:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 229786 00:32:05.039 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:32:05.039 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:32:05.039 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 229786' 00:32:05.039 killing process with pid 229786 00:32:05.039 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 229786 00:32:05.039 Received shutdown signal, test time was about 10.000000 seconds 00:32:05.039 00:32:05.039 Latency(us) 00:32:05.039 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:05.039 =================================================================================================================== 00:32:05.039 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:05.039 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 229786 00:32:05.039 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.zLn9L4hQ3j 00:32:05.039 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zLn9L4hQ3j 00:32:05.039 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:32:05.039 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zLn9L4hQ3j 00:32:05.039 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:32:05.039 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:05.039 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:32:05.039 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:05.039 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zLn9L4hQ3j 00:32:05.039 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:32:05.039 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:32:05.039 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:32:05.039 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.zLn9L4hQ3j 00:32:05.039 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:05.039 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=232108 00:32:05.039 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:05.039 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 232108 /var/tmp/bdevperf.sock 00:32:05.039 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:32:05.039 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 232108 ']' 00:32:05.039 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:05.039 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:05.039 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:05.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:05.039 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:05.039 22:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:05.039 [2024-10-01 22:31:00.288565] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:32:05.039 [2024-10-01 22:31:00.288688] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid232108 ] 00:32:05.301 [2024-10-01 22:31:00.343235] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:05.301 [2024-10-01 22:31:00.395653] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:32:05.875 22:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:05.875 22:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:32:05.875 22:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zLn9L4hQ3j 00:32:06.136 [2024-10-01 22:31:01.233041] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.zLn9L4hQ3j': 0100666 00:32:06.136 [2024-10-01 22:31:01.233068] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:32:06.136 request: 00:32:06.136 { 00:32:06.136 "name": "key0", 00:32:06.136 "path": "/tmp/tmp.zLn9L4hQ3j", 00:32:06.136 "method": "keyring_file_add_key", 00:32:06.136 "req_id": 1 00:32:06.136 } 00:32:06.136 Got JSON-RPC error response 00:32:06.136 response: 00:32:06.136 { 00:32:06.136 "code": -1, 00:32:06.136 "message": "Operation not permitted" 00:32:06.136 } 00:32:06.136 22:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:32:06.397 [2024-10-01 22:31:01.409553] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:06.397 [2024-10-01 22:31:01.409571] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 
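This is the second keyring rule the test exercises: besides requiring absolute paths, the file backend rejects key files whose mode grants group or other access, which is why the chmod 0666 above makes registration fail with "Invalid permissions for key file ... 0100666" and the attach again returns -126. The expected failure is asserted with the NOT wrapper from autotest_common.sh; a simplified sketch of that pattern (the real helper also inspects the exit status more carefully, as the es=1 traces show):

    NOT() {                  # succeed only when the wrapped command fails
        if "$@"; then return 1; else return 0; fi
    }
    chmod 0666 /tmp/tmp.zLn9L4hQ3j
    NOT scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zLn9L4hQ3j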
00:32:06.397 request: 00:32:06.397 { 00:32:06.397 "name": "TLSTEST", 00:32:06.397 "trtype": "tcp", 00:32:06.397 "traddr": "10.0.0.2", 00:32:06.397 "adrfam": "ipv4", 00:32:06.397 "trsvcid": "4420", 00:32:06.397 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:06.397 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:06.397 "prchk_reftag": false, 00:32:06.397 "prchk_guard": false, 00:32:06.397 "hdgst": false, 00:32:06.397 "ddgst": false, 00:32:06.397 "psk": "key0", 00:32:06.397 "allow_unrecognized_csi": false, 00:32:06.397 "method": "bdev_nvme_attach_controller", 00:32:06.397 "req_id": 1 00:32:06.397 } 00:32:06.397 Got JSON-RPC error response 00:32:06.397 response: 00:32:06.397 { 00:32:06.397 "code": -126, 00:32:06.397 "message": "Required key not available" 00:32:06.397 } 00:32:06.397 22:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 232108 00:32:06.397 22:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 232108 ']' 00:32:06.397 22:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 232108 00:32:06.397 22:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:32:06.397 22:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:06.397 22:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 232108 00:32:06.397 22:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:32:06.397 22:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:32:06.397 22:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 232108' 00:32:06.397 killing process with pid 232108 00:32:06.397 22:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 232108 00:32:06.397 Received shutdown signal, test time was about 10.000000 seconds 00:32:06.397 00:32:06.397 Latency(us) 00:32:06.397 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:06.397 =================================================================================================================== 00:32:06.397 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:06.397 22:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 232108 00:32:06.658 22:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:32:06.658 22:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:32:06.658 22:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:06.658 22:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:06.658 22:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:06.658 22:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 229421 00:32:06.658 22:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 229421 ']' 00:32:06.658 22:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 229421 00:32:06.658 22:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:32:06.658 22:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:32:06.658 22:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 229421 00:32:06.658 22:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:06.658 22:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:06.658 22:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 229421' 00:32:06.658 killing process with pid 229421 00:32:06.658 22:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 229421 00:32:06.658 22:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 229421 00:32:06.658 22:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:32:06.658 22:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:06.658 22:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:06.658 22:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:06.658 22:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=232431 00:32:06.658 22:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:06.658 22:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 232431 00:32:06.658 22:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 232431 ']' 00:32:06.658 22:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:06.658 22:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:06.658 22:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:06.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:06.658 22:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:06.658 22:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:06.920 [2024-10-01 22:31:01.953831] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:32:06.920 [2024-10-01 22:31:01.953887] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:06.920 [2024-10-01 22:31:02.037016] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:06.920 [2024-10-01 22:31:02.090540] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:06.920 [2024-10-01 22:31:02.090570] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:06.920 [2024-10-01 22:31:02.090575] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:06.920 [2024-10-01 22:31:02.090580] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
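The setup_nvmf_tgt attempt that follows (still with the 0666-mode key file) is the target-side mirror of the initiator flow: create the TCP transport, a subsystem, a TLS-enabled listener (-k), a malloc0 namespace, register the PSK, and admit the host with --psk. A condensed sketch of those RPCs, all of which appear verbatim in the traces below:

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.zLn9L4hQ3j
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

Because the key file is still world-readable, the keyring add fails here, and nvmf_subsystem_add_host then fails with "Key 'key0' does not exist" (-32603 Internal error), which is exactly the outcome the NOT wrapper asserts.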
00:32:06.920 [2024-10-01 22:31:02.090584] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:06.920 [2024-10-01 22:31:02.090599] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:07.491 22:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:07.491 22:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:32:07.491 22:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:07.491 22:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:07.491 22:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:07.753 22:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:07.753 22:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.zLn9L4hQ3j 00:32:07.753 22:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:32:07.753 22:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.zLn9L4hQ3j 00:32:07.753 22:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:32:07.753 22:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:07.753 22:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:32:07.753 22:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:07.753 22:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.zLn9L4hQ3j 00:32:07.753 22:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zLn9L4hQ3j 00:32:07.753 22:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:32:07.753 [2024-10-01 22:31:02.921161] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:07.753 22:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:32:08.017 22:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:32:08.017 [2024-10-01 22:31:03.237941] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:08.017 [2024-10-01 22:31:03.238133] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:08.017 22:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:32:08.280 malloc0 00:32:08.280 22:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:32:08.541 22:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zLn9L4hQ3j 00:32:08.541 [2024-10-01 22:31:03.725160] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.zLn9L4hQ3j': 0100666 00:32:08.541 [2024-10-01 22:31:03.725182] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:32:08.541 request: 00:32:08.541 { 00:32:08.541 "name": "key0", 00:32:08.541 "path": "/tmp/tmp.zLn9L4hQ3j", 00:32:08.541 "method": "keyring_file_add_key", 00:32:08.541 "req_id": 1 00:32:08.541 } 00:32:08.541 Got JSON-RPC error response 00:32:08.541 response: 00:32:08.541 { 00:32:08.541 "code": -1, 00:32:08.541 "message": "Operation not permitted" 00:32:08.541 } 00:32:08.541 22:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:32:08.801 [2024-10-01 22:31:03.877553] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:32:08.801 [2024-10-01 22:31:03.877577] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:32:08.801 request: 00:32:08.801 { 00:32:08.801 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:08.801 "host": "nqn.2016-06.io.spdk:host1", 00:32:08.801 "psk": "key0", 00:32:08.801 "method": "nvmf_subsystem_add_host", 00:32:08.801 "req_id": 1 00:32:08.801 } 00:32:08.801 Got JSON-RPC error response 00:32:08.801 response: 00:32:08.801 { 00:32:08.801 "code": -32603, 00:32:08.801 "message": "Internal error" 00:32:08.801 } 00:32:08.801 22:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:32:08.801 22:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:08.801 22:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:08.801 22:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:08.801 22:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 232431 00:32:08.801 22:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 232431 ']' 00:32:08.801 22:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 232431 00:32:08.801 22:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:32:08.801 22:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:08.801 22:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 232431 00:32:08.801 22:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:08.801 22:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:08.801 22:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 232431' 00:32:08.801 killing process with pid 232431 00:32:08.801 22:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 232431 00:32:08.801 22:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 232431 00:32:09.062 22:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.zLn9L4hQ3j 00:32:09.062 22:31:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:32:09.062 22:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:09.062 22:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:09.062 22:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:09.062 22:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=232857 00:32:09.062 22:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 232857 00:32:09.062 22:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:09.062 22:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 232857 ']' 00:32:09.062 22:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:09.062 22:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:09.062 22:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:09.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:09.062 22:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:09.062 22:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:09.062 [2024-10-01 22:31:04.187630] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:32:09.062 [2024-10-01 22:31:04.187687] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:09.062 [2024-10-01 22:31:04.270450] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:09.323 [2024-10-01 22:31:04.324251] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:09.323 [2024-10-01 22:31:04.324283] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:09.323 [2024-10-01 22:31:04.324289] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:09.323 [2024-10-01 22:31:04.324294] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:09.323 [2024-10-01 22:31:04.324298] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
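With the key file restored to mode 0600, the target setup and the TLS controller attach (TLSTESTn1) succeed below, and the script then snapshots both the target and the bdevperf application with save_config. The two large JSON documents that follow are those snapshots; the registered PSK appears in each under the "keyring" subsystem. A small sketch for extracting just that section from a live application (jq is an assumption here; any JSON filter would do):

    scripts/rpc.py save_config | jq '.subsystems[] | select(.subsystem == "keyring")'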
00:32:09.323 [2024-10-01 22:31:04.324312] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:09.896 22:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:09.896 22:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:32:09.896 22:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:09.896 22:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:09.896 22:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:09.896 22:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:09.896 22:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.zLn9L4hQ3j 00:32:09.896 22:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zLn9L4hQ3j 00:32:09.896 22:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:32:10.157 [2024-10-01 22:31:05.168372] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:10.157 22:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:32:10.157 22:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:32:10.418 [2024-10-01 22:31:05.505195] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:10.418 [2024-10-01 22:31:05.505382] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:10.418 22:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:32:10.679 malloc0 00:32:10.679 22:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:32:10.679 22:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zLn9L4hQ3j 00:32:10.939 22:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:32:10.939 22:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=233222 00:32:10.939 22:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:10.939 22:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:32:10.939 22:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 233222 /var/tmp/bdevperf.sock 00:32:10.939 22:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 233222 ']' 00:32:10.939 22:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:10.939 22:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:10.939 22:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:10.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:10.939 22:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:10.939 22:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:11.199 [2024-10-01 22:31:06.224758] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:32:11.199 [2024-10-01 22:31:06.224811] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid233222 ] 00:32:11.199 [2024-10-01 22:31:06.275838] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:11.199 [2024-10-01 22:31:06.328280] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:32:11.460 22:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:11.460 22:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:32:11.460 22:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zLn9L4hQ3j 00:32:11.460 22:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:32:11.722 [2024-10-01 22:31:06.787999] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:11.722 TLSTESTn1 00:32:11.722 22:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:32:11.984 22:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:32:11.984 "subsystems": [ 00:32:11.984 { 00:32:11.984 "subsystem": "keyring", 00:32:11.984 "config": [ 00:32:11.984 { 00:32:11.984 "method": "keyring_file_add_key", 00:32:11.984 "params": { 00:32:11.984 "name": "key0", 00:32:11.984 "path": "/tmp/tmp.zLn9L4hQ3j" 00:32:11.984 } 00:32:11.984 } 00:32:11.984 ] 00:32:11.984 }, 00:32:11.984 { 00:32:11.984 "subsystem": "iobuf", 00:32:11.984 "config": [ 00:32:11.984 { 00:32:11.984 "method": "iobuf_set_options", 00:32:11.984 "params": { 00:32:11.984 "small_pool_count": 8192, 00:32:11.984 "large_pool_count": 1024, 00:32:11.984 "small_bufsize": 8192, 00:32:11.984 "large_bufsize": 135168 00:32:11.984 } 00:32:11.984 } 00:32:11.984 ] 00:32:11.984 }, 00:32:11.984 { 00:32:11.984 "subsystem": "sock", 00:32:11.984 "config": [ 00:32:11.984 { 00:32:11.984 "method": "sock_set_default_impl", 00:32:11.984 "params": { 00:32:11.984 "impl_name": "posix" 00:32:11.984 } 00:32:11.984 }, 
00:32:11.984 { 00:32:11.984 "method": "sock_impl_set_options", 00:32:11.984 "params": { 00:32:11.984 "impl_name": "ssl", 00:32:11.984 "recv_buf_size": 4096, 00:32:11.984 "send_buf_size": 4096, 00:32:11.984 "enable_recv_pipe": true, 00:32:11.984 "enable_quickack": false, 00:32:11.984 "enable_placement_id": 0, 00:32:11.984 "enable_zerocopy_send_server": true, 00:32:11.984 "enable_zerocopy_send_client": false, 00:32:11.984 "zerocopy_threshold": 0, 00:32:11.984 "tls_version": 0, 00:32:11.984 "enable_ktls": false 00:32:11.984 } 00:32:11.984 }, 00:32:11.984 { 00:32:11.984 "method": "sock_impl_set_options", 00:32:11.984 "params": { 00:32:11.984 "impl_name": "posix", 00:32:11.984 "recv_buf_size": 2097152, 00:32:11.984 "send_buf_size": 2097152, 00:32:11.984 "enable_recv_pipe": true, 00:32:11.984 "enable_quickack": false, 00:32:11.984 "enable_placement_id": 0, 00:32:11.984 "enable_zerocopy_send_server": true, 00:32:11.984 "enable_zerocopy_send_client": false, 00:32:11.984 "zerocopy_threshold": 0, 00:32:11.984 "tls_version": 0, 00:32:11.984 "enable_ktls": false 00:32:11.984 } 00:32:11.984 } 00:32:11.984 ] 00:32:11.984 }, 00:32:11.984 { 00:32:11.984 "subsystem": "vmd", 00:32:11.984 "config": [] 00:32:11.984 }, 00:32:11.984 { 00:32:11.984 "subsystem": "accel", 00:32:11.984 "config": [ 00:32:11.984 { 00:32:11.984 "method": "accel_set_options", 00:32:11.984 "params": { 00:32:11.984 "small_cache_size": 128, 00:32:11.984 "large_cache_size": 16, 00:32:11.984 "task_count": 2048, 00:32:11.984 "sequence_count": 2048, 00:32:11.984 "buf_count": 2048 00:32:11.984 } 00:32:11.984 } 00:32:11.984 ] 00:32:11.984 }, 00:32:11.984 { 00:32:11.984 "subsystem": "bdev", 00:32:11.984 "config": [ 00:32:11.984 { 00:32:11.984 "method": "bdev_set_options", 00:32:11.984 "params": { 00:32:11.984 "bdev_io_pool_size": 65535, 00:32:11.984 "bdev_io_cache_size": 256, 00:32:11.984 "bdev_auto_examine": true, 00:32:11.984 "iobuf_small_cache_size": 128, 00:32:11.984 "iobuf_large_cache_size": 16, 00:32:11.984 "bdev_io_stack_size": 4096 00:32:11.984 } 00:32:11.984 }, 00:32:11.984 { 00:32:11.984 "method": "bdev_raid_set_options", 00:32:11.984 "params": { 00:32:11.984 "process_window_size_kb": 1024, 00:32:11.984 "process_max_bandwidth_mb_sec": 0 00:32:11.984 } 00:32:11.984 }, 00:32:11.984 { 00:32:11.984 "method": "bdev_iscsi_set_options", 00:32:11.984 "params": { 00:32:11.984 "timeout_sec": 30 00:32:11.984 } 00:32:11.984 }, 00:32:11.984 { 00:32:11.984 "method": "bdev_nvme_set_options", 00:32:11.984 "params": { 00:32:11.984 "action_on_timeout": "none", 00:32:11.984 "timeout_us": 0, 00:32:11.984 "timeout_admin_us": 0, 00:32:11.984 "keep_alive_timeout_ms": 10000, 00:32:11.984 "arbitration_burst": 0, 00:32:11.984 "low_priority_weight": 0, 00:32:11.984 "medium_priority_weight": 0, 00:32:11.984 "high_priority_weight": 0, 00:32:11.984 "nvme_adminq_poll_period_us": 10000, 00:32:11.984 "nvme_ioq_poll_period_us": 0, 00:32:11.984 "io_queue_requests": 0, 00:32:11.984 "delay_cmd_submit": true, 00:32:11.984 "transport_retry_count": 4, 00:32:11.984 "bdev_retry_count": 3, 00:32:11.984 "transport_ack_timeout": 0, 00:32:11.984 "ctrlr_loss_timeout_sec": 0, 00:32:11.985 "reconnect_delay_sec": 0, 00:32:11.985 "fast_io_fail_timeout_sec": 0, 00:32:11.985 "disable_auto_failback": false, 00:32:11.985 "generate_uuids": false, 00:32:11.985 "transport_tos": 0, 00:32:11.985 "nvme_error_stat": false, 00:32:11.985 "rdma_srq_size": 0, 00:32:11.985 "io_path_stat": false, 00:32:11.985 "allow_accel_sequence": false, 00:32:11.985 "rdma_max_cq_size": 0, 00:32:11.985 
"rdma_cm_event_timeout_ms": 0, 00:32:11.985 "dhchap_digests": [ 00:32:11.985 "sha256", 00:32:11.985 "sha384", 00:32:11.985 "sha512" 00:32:11.985 ], 00:32:11.985 "dhchap_dhgroups": [ 00:32:11.985 "null", 00:32:11.985 "ffdhe2048", 00:32:11.985 "ffdhe3072", 00:32:11.985 "ffdhe4096", 00:32:11.985 "ffdhe6144", 00:32:11.985 "ffdhe8192" 00:32:11.985 ] 00:32:11.985 } 00:32:11.985 }, 00:32:11.985 { 00:32:11.985 "method": "bdev_nvme_set_hotplug", 00:32:11.985 "params": { 00:32:11.985 "period_us": 100000, 00:32:11.985 "enable": false 00:32:11.985 } 00:32:11.985 }, 00:32:11.985 { 00:32:11.985 "method": "bdev_malloc_create", 00:32:11.985 "params": { 00:32:11.985 "name": "malloc0", 00:32:11.985 "num_blocks": 8192, 00:32:11.985 "block_size": 4096, 00:32:11.985 "physical_block_size": 4096, 00:32:11.985 "uuid": "d06825d8-8500-4d7a-ae49-6e66a6305e3f", 00:32:11.985 "optimal_io_boundary": 0, 00:32:11.985 "md_size": 0, 00:32:11.985 "dif_type": 0, 00:32:11.985 "dif_is_head_of_md": false, 00:32:11.985 "dif_pi_format": 0 00:32:11.985 } 00:32:11.985 }, 00:32:11.985 { 00:32:11.985 "method": "bdev_wait_for_examine" 00:32:11.985 } 00:32:11.985 ] 00:32:11.985 }, 00:32:11.985 { 00:32:11.985 "subsystem": "nbd", 00:32:11.985 "config": [] 00:32:11.985 }, 00:32:11.985 { 00:32:11.985 "subsystem": "scheduler", 00:32:11.985 "config": [ 00:32:11.985 { 00:32:11.985 "method": "framework_set_scheduler", 00:32:11.985 "params": { 00:32:11.985 "name": "static" 00:32:11.985 } 00:32:11.985 } 00:32:11.985 ] 00:32:11.985 }, 00:32:11.985 { 00:32:11.985 "subsystem": "nvmf", 00:32:11.985 "config": [ 00:32:11.985 { 00:32:11.985 "method": "nvmf_set_config", 00:32:11.985 "params": { 00:32:11.985 "discovery_filter": "match_any", 00:32:11.985 "admin_cmd_passthru": { 00:32:11.985 "identify_ctrlr": false 00:32:11.985 }, 00:32:11.985 "dhchap_digests": [ 00:32:11.985 "sha256", 00:32:11.985 "sha384", 00:32:11.985 "sha512" 00:32:11.985 ], 00:32:11.985 "dhchap_dhgroups": [ 00:32:11.985 "null", 00:32:11.985 "ffdhe2048", 00:32:11.985 "ffdhe3072", 00:32:11.985 "ffdhe4096", 00:32:11.985 "ffdhe6144", 00:32:11.985 "ffdhe8192" 00:32:11.985 ] 00:32:11.985 } 00:32:11.985 }, 00:32:11.985 { 00:32:11.985 "method": "nvmf_set_max_subsystems", 00:32:11.985 "params": { 00:32:11.985 "max_subsystems": 1024 00:32:11.985 } 00:32:11.985 }, 00:32:11.985 { 00:32:11.985 "method": "nvmf_set_crdt", 00:32:11.985 "params": { 00:32:11.985 "crdt1": 0, 00:32:11.985 "crdt2": 0, 00:32:11.985 "crdt3": 0 00:32:11.985 } 00:32:11.985 }, 00:32:11.985 { 00:32:11.985 "method": "nvmf_create_transport", 00:32:11.985 "params": { 00:32:11.985 "trtype": "TCP", 00:32:11.985 "max_queue_depth": 128, 00:32:11.985 "max_io_qpairs_per_ctrlr": 127, 00:32:11.985 "in_capsule_data_size": 4096, 00:32:11.985 "max_io_size": 131072, 00:32:11.985 "io_unit_size": 131072, 00:32:11.985 "max_aq_depth": 128, 00:32:11.985 "num_shared_buffers": 511, 00:32:11.985 "buf_cache_size": 4294967295, 00:32:11.985 "dif_insert_or_strip": false, 00:32:11.985 "zcopy": false, 00:32:11.985 "c2h_success": false, 00:32:11.985 "sock_priority": 0, 00:32:11.985 "abort_timeout_sec": 1, 00:32:11.985 "ack_timeout": 0, 00:32:11.985 "data_wr_pool_size": 0 00:32:11.985 } 00:32:11.985 }, 00:32:11.985 { 00:32:11.985 "method": "nvmf_create_subsystem", 00:32:11.985 "params": { 00:32:11.985 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:11.985 "allow_any_host": false, 00:32:11.985 "serial_number": "SPDK00000000000001", 00:32:11.985 "model_number": "SPDK bdev Controller", 00:32:11.985 "max_namespaces": 10, 00:32:11.985 "min_cntlid": 1, 00:32:11.985 
"max_cntlid": 65519, 00:32:11.985 "ana_reporting": false 00:32:11.985 } 00:32:11.985 }, 00:32:11.985 { 00:32:11.985 "method": "nvmf_subsystem_add_host", 00:32:11.985 "params": { 00:32:11.985 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:11.985 "host": "nqn.2016-06.io.spdk:host1", 00:32:11.985 "psk": "key0" 00:32:11.985 } 00:32:11.985 }, 00:32:11.985 { 00:32:11.985 "method": "nvmf_subsystem_add_ns", 00:32:11.985 "params": { 00:32:11.985 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:11.985 "namespace": { 00:32:11.985 "nsid": 1, 00:32:11.985 "bdev_name": "malloc0", 00:32:11.985 "nguid": "D06825D885004D7AAE496E66A6305E3F", 00:32:11.985 "uuid": "d06825d8-8500-4d7a-ae49-6e66a6305e3f", 00:32:11.985 "no_auto_visible": false 00:32:11.985 } 00:32:11.985 } 00:32:11.985 }, 00:32:11.985 { 00:32:11.985 "method": "nvmf_subsystem_add_listener", 00:32:11.985 "params": { 00:32:11.985 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:11.985 "listen_address": { 00:32:11.985 "trtype": "TCP", 00:32:11.985 "adrfam": "IPv4", 00:32:11.985 "traddr": "10.0.0.2", 00:32:11.985 "trsvcid": "4420" 00:32:11.985 }, 00:32:11.985 "secure_channel": true 00:32:11.985 } 00:32:11.985 } 00:32:11.985 ] 00:32:11.985 } 00:32:11.985 ] 00:32:11.985 }' 00:32:11.985 22:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:32:12.246 22:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:32:12.246 "subsystems": [ 00:32:12.246 { 00:32:12.246 "subsystem": "keyring", 00:32:12.246 "config": [ 00:32:12.246 { 00:32:12.246 "method": "keyring_file_add_key", 00:32:12.246 "params": { 00:32:12.246 "name": "key0", 00:32:12.246 "path": "/tmp/tmp.zLn9L4hQ3j" 00:32:12.246 } 00:32:12.246 } 00:32:12.246 ] 00:32:12.246 }, 00:32:12.246 { 00:32:12.246 "subsystem": "iobuf", 00:32:12.246 "config": [ 00:32:12.246 { 00:32:12.246 "method": "iobuf_set_options", 00:32:12.246 "params": { 00:32:12.246 "small_pool_count": 8192, 00:32:12.246 "large_pool_count": 1024, 00:32:12.246 "small_bufsize": 8192, 00:32:12.246 "large_bufsize": 135168 00:32:12.246 } 00:32:12.246 } 00:32:12.246 ] 00:32:12.246 }, 00:32:12.246 { 00:32:12.246 "subsystem": "sock", 00:32:12.246 "config": [ 00:32:12.246 { 00:32:12.246 "method": "sock_set_default_impl", 00:32:12.246 "params": { 00:32:12.246 "impl_name": "posix" 00:32:12.246 } 00:32:12.246 }, 00:32:12.246 { 00:32:12.246 "method": "sock_impl_set_options", 00:32:12.246 "params": { 00:32:12.246 "impl_name": "ssl", 00:32:12.246 "recv_buf_size": 4096, 00:32:12.246 "send_buf_size": 4096, 00:32:12.246 "enable_recv_pipe": true, 00:32:12.246 "enable_quickack": false, 00:32:12.246 "enable_placement_id": 0, 00:32:12.246 "enable_zerocopy_send_server": true, 00:32:12.246 "enable_zerocopy_send_client": false, 00:32:12.246 "zerocopy_threshold": 0, 00:32:12.246 "tls_version": 0, 00:32:12.246 "enable_ktls": false 00:32:12.246 } 00:32:12.246 }, 00:32:12.246 { 00:32:12.246 "method": "sock_impl_set_options", 00:32:12.246 "params": { 00:32:12.246 "impl_name": "posix", 00:32:12.246 "recv_buf_size": 2097152, 00:32:12.246 "send_buf_size": 2097152, 00:32:12.246 "enable_recv_pipe": true, 00:32:12.246 "enable_quickack": false, 00:32:12.246 "enable_placement_id": 0, 00:32:12.246 "enable_zerocopy_send_server": true, 00:32:12.246 "enable_zerocopy_send_client": false, 00:32:12.246 "zerocopy_threshold": 0, 00:32:12.246 "tls_version": 0, 00:32:12.246 "enable_ktls": false 00:32:12.246 } 00:32:12.246 } 00:32:12.246 ] 00:32:12.246 }, 
00:32:12.246 { 00:32:12.246 "subsystem": "vmd", 00:32:12.246 "config": [] 00:32:12.246 }, 00:32:12.246 { 00:32:12.246 "subsystem": "accel", 00:32:12.246 "config": [ 00:32:12.246 { 00:32:12.246 "method": "accel_set_options", 00:32:12.246 "params": { 00:32:12.246 "small_cache_size": 128, 00:32:12.246 "large_cache_size": 16, 00:32:12.246 "task_count": 2048, 00:32:12.246 "sequence_count": 2048, 00:32:12.246 "buf_count": 2048 00:32:12.246 } 00:32:12.246 } 00:32:12.246 ] 00:32:12.246 }, 00:32:12.246 { 00:32:12.246 "subsystem": "bdev", 00:32:12.246 "config": [ 00:32:12.246 { 00:32:12.246 "method": "bdev_set_options", 00:32:12.246 "params": { 00:32:12.246 "bdev_io_pool_size": 65535, 00:32:12.246 "bdev_io_cache_size": 256, 00:32:12.246 "bdev_auto_examine": true, 00:32:12.246 "iobuf_small_cache_size": 128, 00:32:12.246 "iobuf_large_cache_size": 16, 00:32:12.246 "bdev_io_stack_size": 4096 00:32:12.246 } 00:32:12.246 }, 00:32:12.246 { 00:32:12.246 "method": "bdev_raid_set_options", 00:32:12.246 "params": { 00:32:12.246 "process_window_size_kb": 1024, 00:32:12.246 "process_max_bandwidth_mb_sec": 0 00:32:12.246 } 00:32:12.246 }, 00:32:12.246 { 00:32:12.246 "method": "bdev_iscsi_set_options", 00:32:12.246 "params": { 00:32:12.246 "timeout_sec": 30 00:32:12.246 } 00:32:12.246 }, 00:32:12.246 { 00:32:12.246 "method": "bdev_nvme_set_options", 00:32:12.246 "params": { 00:32:12.246 "action_on_timeout": "none", 00:32:12.246 "timeout_us": 0, 00:32:12.246 "timeout_admin_us": 0, 00:32:12.246 "keep_alive_timeout_ms": 10000, 00:32:12.246 "arbitration_burst": 0, 00:32:12.246 "low_priority_weight": 0, 00:32:12.246 "medium_priority_weight": 0, 00:32:12.246 "high_priority_weight": 0, 00:32:12.246 "nvme_adminq_poll_period_us": 10000, 00:32:12.246 "nvme_ioq_poll_period_us": 0, 00:32:12.246 "io_queue_requests": 512, 00:32:12.246 "delay_cmd_submit": true, 00:32:12.246 "transport_retry_count": 4, 00:32:12.246 "bdev_retry_count": 3, 00:32:12.246 "transport_ack_timeout": 0, 00:32:12.246 "ctrlr_loss_timeout_sec": 0, 00:32:12.246 "reconnect_delay_sec": 0, 00:32:12.246 "fast_io_fail_timeout_sec": 0, 00:32:12.246 "disable_auto_failback": false, 00:32:12.246 "generate_uuids": false, 00:32:12.246 "transport_tos": 0, 00:32:12.246 "nvme_error_stat": false, 00:32:12.246 "rdma_srq_size": 0, 00:32:12.246 "io_path_stat": false, 00:32:12.246 "allow_accel_sequence": false, 00:32:12.246 "rdma_max_cq_size": 0, 00:32:12.246 "rdma_cm_event_timeout_ms": 0, 00:32:12.246 "dhchap_digests": [ 00:32:12.246 "sha256", 00:32:12.246 "sha384", 00:32:12.246 "sha512" 00:32:12.246 ], 00:32:12.246 "dhchap_dhgroups": [ 00:32:12.246 "null", 00:32:12.246 "ffdhe2048", 00:32:12.246 "ffdhe3072", 00:32:12.246 "ffdhe4096", 00:32:12.246 "ffdhe6144", 00:32:12.246 "ffdhe8192" 00:32:12.247 ] 00:32:12.247 } 00:32:12.247 }, 00:32:12.247 { 00:32:12.247 "method": "bdev_nvme_attach_controller", 00:32:12.247 "params": { 00:32:12.247 "name": "TLSTEST", 00:32:12.247 "trtype": "TCP", 00:32:12.247 "adrfam": "IPv4", 00:32:12.247 "traddr": "10.0.0.2", 00:32:12.247 "trsvcid": "4420", 00:32:12.247 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:12.247 "prchk_reftag": false, 00:32:12.247 "prchk_guard": false, 00:32:12.247 "ctrlr_loss_timeout_sec": 0, 00:32:12.247 "reconnect_delay_sec": 0, 00:32:12.247 "fast_io_fail_timeout_sec": 0, 00:32:12.247 "psk": "key0", 00:32:12.247 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:12.247 "hdgst": false, 00:32:12.247 "ddgst": false 00:32:12.247 } 00:32:12.247 }, 00:32:12.247 { 00:32:12.247 "method": "bdev_nvme_set_hotplug", 00:32:12.247 "params": { 
00:32:12.247 "period_us": 100000, 00:32:12.247 "enable": false 00:32:12.247 } 00:32:12.247 }, 00:32:12.247 { 00:32:12.247 "method": "bdev_wait_for_examine" 00:32:12.247 } 00:32:12.247 ] 00:32:12.247 }, 00:32:12.247 { 00:32:12.247 "subsystem": "nbd", 00:32:12.247 "config": [] 00:32:12.247 } 00:32:12.247 ] 00:32:12.247 }' 00:32:12.247 22:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 233222 00:32:12.247 22:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 233222 ']' 00:32:12.247 22:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 233222 00:32:12.247 22:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:32:12.247 22:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:12.247 22:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 233222 00:32:12.247 22:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:32:12.247 22:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:32:12.247 22:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 233222' 00:32:12.247 killing process with pid 233222 00:32:12.247 22:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 233222 00:32:12.247 Received shutdown signal, test time was about 10.000000 seconds 00:32:12.247 00:32:12.247 Latency(us) 00:32:12.247 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:12.247 =================================================================================================================== 00:32:12.247 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:12.247 22:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 233222 00:32:12.507 22:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 232857 00:32:12.507 22:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 232857 ']' 00:32:12.507 22:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 232857 00:32:12.507 22:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:32:12.507 22:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:12.507 22:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 232857 00:32:12.507 22:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:12.507 22:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:12.507 22:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 232857' 00:32:12.507 killing process with pid 232857 00:32:12.507 22:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 232857 00:32:12.507 22:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 232857 00:32:12.768 22:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:32:12.768 22:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # 
timing_enter start_nvmf_tgt 00:32:12.768 22:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:12.768 22:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:12.768 22:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:32:12.768 "subsystems": [ 00:32:12.768 { 00:32:12.768 "subsystem": "keyring", 00:32:12.768 "config": [ 00:32:12.768 { 00:32:12.768 "method": "keyring_file_add_key", 00:32:12.768 "params": { 00:32:12.768 "name": "key0", 00:32:12.768 "path": "/tmp/tmp.zLn9L4hQ3j" 00:32:12.768 } 00:32:12.768 } 00:32:12.768 ] 00:32:12.768 }, 00:32:12.768 { 00:32:12.768 "subsystem": "iobuf", 00:32:12.768 "config": [ 00:32:12.768 { 00:32:12.768 "method": "iobuf_set_options", 00:32:12.768 "params": { 00:32:12.768 "small_pool_count": 8192, 00:32:12.768 "large_pool_count": 1024, 00:32:12.768 "small_bufsize": 8192, 00:32:12.768 "large_bufsize": 135168 00:32:12.768 } 00:32:12.768 } 00:32:12.768 ] 00:32:12.768 }, 00:32:12.768 { 00:32:12.768 "subsystem": "sock", 00:32:12.768 "config": [ 00:32:12.768 { 00:32:12.768 "method": "sock_set_default_impl", 00:32:12.768 "params": { 00:32:12.768 "impl_name": "posix" 00:32:12.768 } 00:32:12.768 }, 00:32:12.768 { 00:32:12.768 "method": "sock_impl_set_options", 00:32:12.768 "params": { 00:32:12.768 "impl_name": "ssl", 00:32:12.768 "recv_buf_size": 4096, 00:32:12.768 "send_buf_size": 4096, 00:32:12.768 "enable_recv_pipe": true, 00:32:12.768 "enable_quickack": false, 00:32:12.768 "enable_placement_id": 0, 00:32:12.768 "enable_zerocopy_send_server": true, 00:32:12.768 "enable_zerocopy_send_client": false, 00:32:12.768 "zerocopy_threshold": 0, 00:32:12.768 "tls_version": 0, 00:32:12.768 "enable_ktls": false 00:32:12.768 } 00:32:12.768 }, 00:32:12.768 { 00:32:12.768 "method": "sock_impl_set_options", 00:32:12.768 "params": { 00:32:12.768 "impl_name": "posix", 00:32:12.768 "recv_buf_size": 2097152, 00:32:12.768 "send_buf_size": 2097152, 00:32:12.768 "enable_recv_pipe": true, 00:32:12.768 "enable_quickack": false, 00:32:12.768 "enable_placement_id": 0, 00:32:12.768 "enable_zerocopy_send_server": true, 00:32:12.768 "enable_zerocopy_send_client": false, 00:32:12.768 "zerocopy_threshold": 0, 00:32:12.768 "tls_version": 0, 00:32:12.768 "enable_ktls": false 00:32:12.768 } 00:32:12.768 } 00:32:12.768 ] 00:32:12.768 }, 00:32:12.768 { 00:32:12.768 "subsystem": "vmd", 00:32:12.768 "config": [] 00:32:12.768 }, 00:32:12.768 { 00:32:12.768 "subsystem": "accel", 00:32:12.768 "config": [ 00:32:12.768 { 00:32:12.768 "method": "accel_set_options", 00:32:12.768 "params": { 00:32:12.768 "small_cache_size": 128, 00:32:12.768 "large_cache_size": 16, 00:32:12.768 "task_count": 2048, 00:32:12.768 "sequence_count": 2048, 00:32:12.768 "buf_count": 2048 00:32:12.768 } 00:32:12.768 } 00:32:12.768 ] 00:32:12.768 }, 00:32:12.768 { 00:32:12.768 "subsystem": "bdev", 00:32:12.768 "config": [ 00:32:12.768 { 00:32:12.768 "method": "bdev_set_options", 00:32:12.768 "params": { 00:32:12.768 "bdev_io_pool_size": 65535, 00:32:12.768 "bdev_io_cache_size": 256, 00:32:12.768 "bdev_auto_examine": true, 00:32:12.768 "iobuf_small_cache_size": 128, 00:32:12.768 "iobuf_large_cache_size": 16, 00:32:12.768 "bdev_io_stack_size": 4096 00:32:12.768 } 00:32:12.768 }, 00:32:12.768 { 00:32:12.768 "method": "bdev_raid_set_options", 00:32:12.768 "params": { 00:32:12.768 "process_window_size_kb": 1024, 00:32:12.768 "process_max_bandwidth_mb_sec": 0 00:32:12.768 } 00:32:12.768 }, 00:32:12.768 { 00:32:12.768 
"method": "bdev_iscsi_set_options", 00:32:12.768 "params": { 00:32:12.768 "timeout_sec": 30 00:32:12.768 } 00:32:12.768 }, 00:32:12.768 { 00:32:12.768 "method": "bdev_nvme_set_options", 00:32:12.768 "params": { 00:32:12.768 "action_on_timeout": "none", 00:32:12.768 "timeout_us": 0, 00:32:12.768 "timeout_admin_us": 0, 00:32:12.768 "keep_alive_timeout_ms": 10000, 00:32:12.768 "arbitration_burst": 0, 00:32:12.768 "low_priority_weight": 0, 00:32:12.768 "medium_priority_weight": 0, 00:32:12.768 "high_priority_weight": 0, 00:32:12.768 "nvme_adminq_poll_period_us": 10000, 00:32:12.768 "nvme_ioq_poll_period_us": 0, 00:32:12.768 "io_queue_requests": 0, 00:32:12.768 "delay_cmd_submit": true, 00:32:12.768 "transport_retry_count": 4, 00:32:12.768 "bdev_retry_count": 3, 00:32:12.768 "transport_ack_timeout": 0, 00:32:12.768 "ctrlr_loss_timeout_sec": 0, 00:32:12.768 "reconnect_delay_sec": 0, 00:32:12.768 "fast_io_fail_timeout_sec": 0, 00:32:12.768 "disable_auto_failback": false, 00:32:12.768 "generate_uuids": false, 00:32:12.768 "transport_tos": 0, 00:32:12.768 "nvme_error_stat": false, 00:32:12.768 "rdma_srq_size": 0, 00:32:12.768 "io_path_stat": false, 00:32:12.768 "allow_accel_sequence": false, 00:32:12.768 "rdma_max_cq_size": 0, 00:32:12.768 "rdma_cm_event_timeout_ms": 0, 00:32:12.769 "dhchap_digests": [ 00:32:12.769 "sha256", 00:32:12.769 "sha384", 00:32:12.769 "sha512" 00:32:12.769 ], 00:32:12.769 "dhchap_dhgroups": [ 00:32:12.769 "null", 00:32:12.769 "ffdhe2048", 00:32:12.769 "ffdhe3072", 00:32:12.769 "ffdhe4096", 00:32:12.769 "ffdhe6144", 00:32:12.769 "ffdhe8192" 00:32:12.769 ] 00:32:12.769 } 00:32:12.769 }, 00:32:12.769 { 00:32:12.769 "method": "bdev_nvme_set_hotplug", 00:32:12.769 "params": { 00:32:12.769 "period_us": 100000, 00:32:12.769 "enable": false 00:32:12.769 } 00:32:12.769 }, 00:32:12.769 { 00:32:12.769 "method": "bdev_malloc_create", 00:32:12.769 "params": { 00:32:12.769 "name": "malloc0", 00:32:12.769 "num_blocks": 8192, 00:32:12.769 "block_size": 4096, 00:32:12.769 "physical_block_size": 4096, 00:32:12.769 "uuid": "d06825d8-8500-4d7a-ae49-6e66a6305e3f", 00:32:12.769 "optimal_io_boundary": 0, 00:32:12.769 "md_size": 0, 00:32:12.769 "dif_type": 0, 00:32:12.769 "dif_is_head_of_md": false, 00:32:12.769 "dif_pi_format": 0 00:32:12.769 } 00:32:12.769 }, 00:32:12.769 { 00:32:12.769 "method": "bdev_wait_for_examine" 00:32:12.769 } 00:32:12.769 ] 00:32:12.769 }, 00:32:12.769 { 00:32:12.769 "subsystem": "nbd", 00:32:12.769 "config": [] 00:32:12.769 }, 00:32:12.769 { 00:32:12.769 "subsystem": "scheduler", 00:32:12.769 "config": [ 00:32:12.769 { 00:32:12.769 "method": "framework_set_scheduler", 00:32:12.769 "params": { 00:32:12.769 "name": "static" 00:32:12.769 } 00:32:12.769 } 00:32:12.769 ] 00:32:12.769 }, 00:32:12.769 { 00:32:12.769 "subsystem": "nvmf", 00:32:12.769 "config": [ 00:32:12.769 { 00:32:12.769 "method": "nvmf_set_config", 00:32:12.769 "params": { 00:32:12.769 "discovery_filter": "match_any", 00:32:12.769 "admin_cmd_passthru": { 00:32:12.769 "identify_ctrlr": false 00:32:12.769 }, 00:32:12.769 "dhchap_digests": [ 00:32:12.769 "sha256", 00:32:12.769 "sha384", 00:32:12.769 "sha512" 00:32:12.769 ], 00:32:12.769 "dhchap_dhgroups": [ 00:32:12.769 "null", 00:32:12.769 "ffdhe2048", 00:32:12.769 "ffdhe3072", 00:32:12.769 "ffdhe4096", 00:32:12.769 "ffdhe6144", 00:32:12.769 "ffdhe8192" 00:32:12.769 ] 00:32:12.769 } 00:32:12.769 }, 00:32:12.769 { 00:32:12.769 "method": "nvmf_set_max_subsystems", 00:32:12.769 "params": { 00:32:12.769 "max_subsystems": 1024 00:32:12.769 } 00:32:12.769 }, 
00:32:12.769 { 00:32:12.769 "method": "nvmf_set_crdt", 00:32:12.769 "params": { 00:32:12.769 "crdt1": 0, 00:32:12.769 "crdt2": 0, 00:32:12.769 "crdt3": 0 00:32:12.769 } 00:32:12.769 }, 00:32:12.769 { 00:32:12.769 "method": "nvmf_create_transport", 00:32:12.769 "params": { 00:32:12.769 "trtype": "TCP", 00:32:12.769 "max_queue_depth": 128, 00:32:12.769 "max_io_qpairs_per_ctrlr": 127, 00:32:12.769 "in_capsule_data_size": 4096, 00:32:12.769 "max_io_size": 131072, 00:32:12.769 "io_unit_size": 131072, 00:32:12.769 "max_aq_depth": 128, 00:32:12.769 "num_shared_buffers": 511, 00:32:12.769 "buf_cache_size": 4294967295, 00:32:12.769 "dif_insert_or_strip": false, 00:32:12.769 "zcopy": false, 00:32:12.769 "c2h_success": false, 00:32:12.769 "sock_priority": 0, 00:32:12.769 "abort_timeout_sec": 1, 00:32:12.769 "ack_timeout": 0, 00:32:12.769 "data_wr_pool_size": 0 00:32:12.769 } 00:32:12.769 }, 00:32:12.769 { 00:32:12.769 "method": "nvmf_create_subsystem", 00:32:12.769 "params": { 00:32:12.769 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:12.769 "allow_any_host": false, 00:32:12.769 "serial_number": "SPDK00000000000001", 00:32:12.769 "model_number": "SPDK bdev Controller", 00:32:12.769 "max_namespaces": 10, 00:32:12.769 "min_cntlid": 1, 00:32:12.769 "max_cntlid": 65519, 00:32:12.769 "ana_reporting": false 00:32:12.769 } 00:32:12.769 }, 00:32:12.769 { 00:32:12.769 "method": "nvmf_subsystem_add_host", 00:32:12.769 "params": { 00:32:12.769 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:12.769 "host": "nqn.2016-06.io.spdk:host1", 00:32:12.769 "psk": "key0" 00:32:12.769 } 00:32:12.769 }, 00:32:12.769 { 00:32:12.769 "method": "nvmf_subsystem_add_ns", 00:32:12.769 "params": { 00:32:12.769 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:12.769 "namespace": { 00:32:12.769 "nsid": 1, 00:32:12.769 "bdev_name": "malloc0", 00:32:12.769 "nguid": "D06825D885004D7AAE496E66A6305E3F", 00:32:12.769 "uuid": "d06825d8-8500-4d7a-ae49-6e66a6305e3f", 00:32:12.769 "no_auto_visible": false 00:32:12.769 } 00:32:12.769 } 00:32:12.769 }, 00:32:12.769 { 00:32:12.769 "method": "nvmf_subsystem_add_listener", 00:32:12.769 "params": { 00:32:12.769 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:12.769 "listen_address": { 00:32:12.769 "trtype": "TCP", 00:32:12.769 "adrfam": "IPv4", 00:32:12.769 "traddr": "10.0.0.2", 00:32:12.769 "trsvcid": "4420" 00:32:12.769 }, 00:32:12.769 "secure_channel": true 00:32:12.769 } 00:32:12.769 } 00:32:12.769 ] 00:32:12.769 } 00:32:12.769 ] 00:32:12.769 }' 00:32:12.769 22:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=233577 00:32:12.769 22:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 233577 00:32:12.769 22:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:32:12.769 22:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 233577 ']' 00:32:12.769 22:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:12.769 22:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:12.769 22:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:12.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
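The target in this pass is not configured RPC by RPC: the JSON dumped above is echoed into a process substitution and handed to nvmf_tgt as -c /dev/fd/62, so the keyring entry, the subsystem, and the secure-channel listener are all recreated at startup. A minimal sketch of the same pattern, run from an SPDK checkout, with $tgtconf standing in for the saved JSON (the contents of the PSK file /tmp/tmp.zLn9L4hQ3j are never printed in this log, so they remain an assumption):

# Restart an nvmf target from a previously saved configuration.
build/bin/nvmf_tgt -m 0x2 -c <(echo "$tgtconf") &
# Block until the RPC socket answers before issuing further RPCs;
# any cheap RPC such as rpc_get_methods works for this.
scripts/rpc.py -t 30 rpc_get_methods > /dev/null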
00:32:12.769 22:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:12.769 22:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:12.769 [2024-10-01 22:31:07.906792] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:32:12.769 [2024-10-01 22:31:07.906850] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:12.770 [2024-10-01 22:31:07.989954] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:13.030 [2024-10-01 22:31:08.043215] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:13.030 [2024-10-01 22:31:08.043248] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:13.030 [2024-10-01 22:31:08.043254] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:13.030 [2024-10-01 22:31:08.043258] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:13.030 [2024-10-01 22:31:08.043262] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:13.030 [2024-10-01 22:31:08.043306] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:13.291 [2024-10-01 22:31:08.289154] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:13.291 [2024-10-01 22:31:08.321179] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:13.291 [2024-10-01 22:31:08.321377] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:13.551 22:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:13.551 22:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:32:13.551 22:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:13.551 22:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:13.551 22:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:13.551 22:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:13.551 22:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=233826 00:32:13.551 22:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 233826 /var/tmp/bdevperf.sock 00:32:13.551 22:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 233826 ']' 00:32:13.551 22:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:32:13.551 "subsystems": [ 00:32:13.551 { 00:32:13.551 "subsystem": "keyring", 00:32:13.551 "config": [ 00:32:13.551 { 00:32:13.551 "method": "keyring_file_add_key", 00:32:13.551 "params": { 00:32:13.551 "name": "key0", 00:32:13.551 "path": "/tmp/tmp.zLn9L4hQ3j" 00:32:13.551 } 00:32:13.551 } 00:32:13.551 ] 00:32:13.551 }, 00:32:13.551 { 00:32:13.551 "subsystem": "iobuf", 00:32:13.551 "config": [ 00:32:13.551 { 00:32:13.551 "method": "iobuf_set_options", 00:32:13.551 "params": { 00:32:13.551 "small_pool_count": 8192, 00:32:13.551 "large_pool_count": 
1024, 00:32:13.551 "small_bufsize": 8192, 00:32:13.551 "large_bufsize": 135168 00:32:13.551 } 00:32:13.551 } 00:32:13.551 ] 00:32:13.551 }, 00:32:13.551 { 00:32:13.551 "subsystem": "sock", 00:32:13.551 "config": [ 00:32:13.551 { 00:32:13.551 "method": "sock_set_default_impl", 00:32:13.551 "params": { 00:32:13.551 "impl_name": "posix" 00:32:13.551 } 00:32:13.551 }, 00:32:13.551 { 00:32:13.551 "method": "sock_impl_set_options", 00:32:13.551 "params": { 00:32:13.551 "impl_name": "ssl", 00:32:13.551 "recv_buf_size": 4096, 00:32:13.551 "send_buf_size": 4096, 00:32:13.551 "enable_recv_pipe": true, 00:32:13.551 "enable_quickack": false, 00:32:13.551 "enable_placement_id": 0, 00:32:13.551 "enable_zerocopy_send_server": true, 00:32:13.551 "enable_zerocopy_send_client": false, 00:32:13.551 "zerocopy_threshold": 0, 00:32:13.551 "tls_version": 0, 00:32:13.551 "enable_ktls": false 00:32:13.551 } 00:32:13.551 }, 00:32:13.551 { 00:32:13.551 "method": "sock_impl_set_options", 00:32:13.551 "params": { 00:32:13.551 "impl_name": "posix", 00:32:13.551 "recv_buf_size": 2097152, 00:32:13.551 "send_buf_size": 2097152, 00:32:13.551 "enable_recv_pipe": true, 00:32:13.551 "enable_quickack": false, 00:32:13.551 "enable_placement_id": 0, 00:32:13.551 "enable_zerocopy_send_server": true, 00:32:13.552 "enable_zerocopy_send_client": false, 00:32:13.552 "zerocopy_threshold": 0, 00:32:13.552 "tls_version": 0, 00:32:13.552 "enable_ktls": false 00:32:13.552 } 00:32:13.552 } 00:32:13.552 ] 00:32:13.552 }, 00:32:13.552 { 00:32:13.552 "subsystem": "vmd", 00:32:13.552 "config": [] 00:32:13.552 }, 00:32:13.552 { 00:32:13.552 "subsystem": "accel", 00:32:13.552 "config": [ 00:32:13.552 { 00:32:13.552 "method": "accel_set_options", 00:32:13.552 "params": { 00:32:13.552 "small_cache_size": 128, 00:32:13.552 "large_cache_size": 16, 00:32:13.552 "task_count": 2048, 00:32:13.552 "sequence_count": 2048, 00:32:13.552 "buf_count": 2048 00:32:13.552 } 00:32:13.552 } 00:32:13.552 ] 00:32:13.552 }, 00:32:13.552 { 00:32:13.552 "subsystem": "bdev", 00:32:13.552 "config": [ 00:32:13.552 { 00:32:13.552 "method": "bdev_set_options", 00:32:13.552 "params": { 00:32:13.552 "bdev_io_pool_size": 65535, 00:32:13.552 "bdev_io_cache_size": 256, 00:32:13.552 "bdev_auto_examine": true, 00:32:13.552 "iobuf_small_cache_size": 128, 00:32:13.552 "iobuf_large_cache_size": 16, 00:32:13.552 "bdev_io_stack_size": 4096 00:32:13.552 } 00:32:13.552 }, 00:32:13.552 { 00:32:13.552 "method": "bdev_raid_set_options", 00:32:13.552 "params": { 00:32:13.552 "process_window_size_kb": 1024, 00:32:13.552 "process_max_bandwidth_mb_sec": 0 00:32:13.552 } 00:32:13.552 }, 00:32:13.552 { 00:32:13.552 "method": "bdev_iscsi_set_options", 00:32:13.552 "params": { 00:32:13.552 "timeout_sec": 30 00:32:13.552 } 00:32:13.552 }, 00:32:13.552 { 00:32:13.552 "method": "bdev_nvme_set_options", 00:32:13.552 "params": { 00:32:13.552 "action_on_timeout": "none", 00:32:13.552 "timeout_us": 0, 00:32:13.552 "timeout_admin_us": 0, 00:32:13.552 "keep_alive_timeout_ms": 10000, 00:32:13.552 "arbitration_burst": 0, 00:32:13.552 "low_priority_weight": 0, 00:32:13.552 "medium_priority_weight": 0, 00:32:13.552 "high_priority_weight": 0, 00:32:13.552 "nvme_adminq_poll_period_us": 10000, 00:32:13.552 "nvme_ioq_poll_period_us": 0, 00:32:13.552 "io_queue_requests": 512, 00:32:13.552 "delay_cmd_submit": true, 00:32:13.552 "transport_retry_count": 4, 00:32:13.552 "bdev_retry_count": 3, 00:32:13.552 "transport_ack_timeout": 0, 00:32:13.552 "ctrlr_loss_timeout_sec": 0, 00:32:13.552 "reconnect_delay_sec": 0, 
00:32:13.552 "fast_io_fail_timeout_sec": 0, 00:32:13.552 "disable_auto_failback": false, 00:32:13.552 "generate_uuids": false, 00:32:13.552 "transport_tos": 0, 00:32:13.552 "nvme_error_stat": false, 00:32:13.552 "rdma_srq_size": 0, 00:32:13.552 "io_path_stat": false, 00:32:13.552 "allow_accel_sequence": false, 00:32:13.552 "rdma_max_cq_size": 0, 00:32:13.552 "rdma_cm_event_timeout_ms": 0, 00:32:13.552 "dhchap_digests": [ 00:32:13.552 "sha256", 00:32:13.552 "sha384", 00:32:13.552 "sha512" 00:32:13.552 ], 00:32:13.552 "dhchap_dhgroups": [ 00:32:13.552 "null", 00:32:13.552 "ffdhe2048", 00:32:13.552 "ffdhe3072", 00:32:13.552 "ffdhe4096", 00:32:13.552 "ffdhe6144", 00:32:13.552 "ffdhe8192" 00:32:13.552 ] 00:32:13.552 } 00:32:13.552 }, 00:32:13.552 { 00:32:13.552 "method": "bdev_nvme_attach_controller", 00:32:13.552 "params": { 00:32:13.552 "name": "TLSTEST", 00:32:13.552 "trtype": "TCP", 00:32:13.552 "adrfam": "IPv4", 00:32:13.552 "traddr": "10.0.0.2", 00:32:13.552 "trsvcid": "4420", 00:32:13.552 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:13.552 "prchk_reftag": false, 00:32:13.552 "prchk_guard": false, 00:32:13.552 "ctrlr_loss_timeout_sec": 0, 00:32:13.552 "reconnect_delay_sec": 0, 00:32:13.552 "fast_io_fail_timeout_sec": 0, 00:32:13.552 "psk": "key0", 00:32:13.552 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:13.552 "hdgst": false, 00:32:13.552 "ddgst": false 00:32:13.552 } 00:32:13.552 }, 00:32:13.552 { 00:32:13.552 "method": "bdev_nvme_set_hotplug", 00:32:13.552 "params": { 00:32:13.552 "period_us": 100000, 00:32:13.552 "enable": false 00:32:13.552 } 00:32:13.552 }, 00:32:13.552 { 00:32:13.552 "method": "bdev_wait_for_examine" 00:32:13.552 } 00:32:13.552 ] 00:32:13.552 }, 00:32:13.552 { 00:32:13.552 "subsystem": "nbd", 00:32:13.552 "config": [] 00:32:13.552 } 00:32:13.552 ] 00:32:13.552 }' 00:32:13.552 22:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:13.552 22:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:13.552 22:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:32:13.552 22:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:13.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:13.552 22:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:13.552 22:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:13.552 [2024-10-01 22:31:08.778454] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:32:13.552 [2024-10-01 22:31:08.778508] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid233826 ] 00:32:13.812 [2024-10-01 22:31:08.829592] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:13.812 [2024-10-01 22:31:08.882211] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:32:14.071 [2024-10-01 22:31:09.069005] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:14.641 22:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:14.641 22:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:32:14.641 22:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:32:14.641 Running I/O for 10 seconds... 00:32:24.743 1029.00 IOPS, 4.02 MiB/s 1718.00 IOPS, 6.71 MiB/s 3040.33 IOPS, 11.88 MiB/s 3207.75 IOPS, 12.53 MiB/s 2980.40 IOPS, 11.64 MiB/s 2781.00 IOPS, 10.86 MiB/s 3262.86 IOPS, 12.75 MiB/s 3572.25 IOPS, 13.95 MiB/s 3430.67 IOPS, 13.40 MiB/s 3370.60 IOPS, 13.17 MiB/s 00:32:24.743 Latency(us) 00:32:24.743 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:24.743 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:32:24.743 Verification LBA range: start 0x0 length 0x2000 00:32:24.743 TLSTESTn1 : 10.09 3353.96 13.10 0.00 0.00 38025.40 6089.39 205346.13 00:32:24.743 =================================================================================================================== 00:32:24.743 Total : 3353.96 13.10 0.00 0.00 38025.40 6089.39 205346.13 00:32:24.743 { 00:32:24.743 "results": [ 00:32:24.743 { 00:32:24.743 "job": "TLSTESTn1", 00:32:24.743 "core_mask": "0x4", 00:32:24.743 "workload": "verify", 00:32:24.743 "status": "finished", 00:32:24.743 "verify_range": { 00:32:24.743 "start": 0, 00:32:24.743 "length": 8192 00:32:24.743 }, 00:32:24.743 "queue_depth": 128, 00:32:24.743 "io_size": 4096, 00:32:24.743 "runtime": 10.087776, 00:32:24.743 "iops": 3353.9602782615316, 00:32:24.743 "mibps": 13.101407336959108, 00:32:24.743 "io_failed": 0, 00:32:24.743 "io_timeout": 0, 00:32:24.743 "avg_latency_us": 38025.40053831452, 00:32:24.743 "min_latency_us": 6089.386666666666, 00:32:24.743 "max_latency_us": 205346.13333333333 00:32:24.743 } 00:32:24.743 ], 00:32:24.743 "core_count": 1 00:32:24.743 } 00:32:24.743 22:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:24.743 22:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 233826 00:32:24.743 22:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 233826 ']' 00:32:24.743 22:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 233826 00:32:24.743 22:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:32:24.743 22:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:24.743 22:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 233826 00:32:24.743 22:31:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:32:24.743 22:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:32:24.743 22:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 233826' 00:32:24.743 killing process with pid 233826 00:32:24.743 22:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 233826 00:32:24.743 Received shutdown signal, test time was about 10.000000 seconds 00:32:24.743 00:32:24.743 Latency(us) 00:32:24.743 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:24.743 =================================================================================================================== 00:32:24.743 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:24.743 22:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 233826 00:32:25.003 22:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 233577 00:32:25.003 22:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 233577 ']' 00:32:25.003 22:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 233577 00:32:25.003 22:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:32:25.003 22:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:25.003 22:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 233577 00:32:25.003 22:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:25.003 22:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:25.003 22:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 233577' 00:32:25.003 killing process with pid 233577 00:32:25.003 22:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 233577 00:32:25.003 22:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 233577 00:32:25.264 22:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:32:25.264 22:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:25.264 22:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:25.264 22:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:25.264 22:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=235952 00:32:25.264 22:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 235952 00:32:25.264 22:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:32:25.264 22:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 235952 ']' 00:32:25.264 22:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:25.264 22:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:25.264 22:31:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:25.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:25.264 22:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:25.264 22:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:25.264 [2024-10-01 22:31:20.360907] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:32:25.264 [2024-10-01 22:31:20.360965] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:25.264 [2024-10-01 22:31:20.426821] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:25.264 [2024-10-01 22:31:20.492380] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:25.264 [2024-10-01 22:31:20.492417] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:25.264 [2024-10-01 22:31:20.492425] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:25.264 [2024-10-01 22:31:20.492432] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:25.264 [2024-10-01 22:31:20.492438] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:25.264 [2024-10-01 22:31:20.492462] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:26.205 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:26.205 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:32:26.205 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:26.205 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:26.205 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:26.205 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:26.205 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.zLn9L4hQ3j 00:32:26.205 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zLn9L4hQ3j 00:32:26.205 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:32:26.205 [2024-10-01 22:31:21.341078] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:26.205 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:32:26.465 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:32:26.465 [2024-10-01 22:31:21.693971] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered 
experimental 00:32:26.465 [2024-10-01 22:31:21.694187] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:26.725 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:32:26.725 malloc0 00:32:26.725 22:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:32:26.986 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zLn9L4hQ3j 00:32:27.247 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:32:27.247 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:32:27.247 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=236467 00:32:27.247 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:27.247 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 236467 /var/tmp/bdevperf.sock 00:32:27.247 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 236467 ']' 00:32:27.247 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:27.247 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:27.247 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:27.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:27.247 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:27.247 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:27.247 [2024-10-01 22:31:22.476145] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
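Unlike the earlier pass, setup_nvmf_tgt (target/tls.sh@221) builds the TLS target RPC by RPC rather than from a saved config. Collected from the trace above into one place, the sequence is (a sketch; rpc.py path relative to the SPDK checkout, PSK file as created earlier in the test):

scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
# -k marks the listener as requiring a secure channel (TLS).
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zLn9L4hQ3j
# Bind the host NQN to the PSK so only this host may connect with it.
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0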
00:32:27.247 [2024-10-01 22:31:22.476201] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid236467 ] 00:32:27.507 [2024-10-01 22:31:22.553259] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:27.507 [2024-10-01 22:31:22.607602] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:28.079 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:28.079 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:32:28.079 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zLn9L4hQ3j 00:32:28.339 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:32:28.339 [2024-10-01 22:31:23.590893] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:28.600 nvme0n1 00:32:28.600 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:28.600 Running I/O for 1 seconds... 00:32:29.539 4158.00 IOPS, 16.24 MiB/s 00:32:29.539 Latency(us) 00:32:29.539 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:29.539 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:29.539 Verification LBA range: start 0x0 length 0x2000 00:32:29.539 nvme0n1 : 1.02 4212.59 16.46 0.00 0.00 30161.50 5570.56 89565.87 00:32:29.539 =================================================================================================================== 00:32:29.539 Total : 4212.59 16.46 0.00 0.00 30161.50 5570.56 89565.87 00:32:29.539 { 00:32:29.539 "results": [ 00:32:29.539 { 00:32:29.539 "job": "nvme0n1", 00:32:29.539 "core_mask": "0x2", 00:32:29.539 "workload": "verify", 00:32:29.539 "status": "finished", 00:32:29.539 "verify_range": { 00:32:29.539 "start": 0, 00:32:29.539 "length": 8192 00:32:29.539 }, 00:32:29.539 "queue_depth": 128, 00:32:29.539 "io_size": 4096, 00:32:29.539 "runtime": 1.017426, 00:32:29.539 "iops": 4212.591382567381, 00:32:29.539 "mibps": 16.45543508815383, 00:32:29.539 "io_failed": 0, 00:32:29.539 "io_timeout": 0, 00:32:29.539 "avg_latency_us": 30161.49711619226, 00:32:29.539 "min_latency_us": 5570.56, 00:32:29.539 "max_latency_us": 89565.86666666667 00:32:29.539 } 00:32:29.539 ], 00:32:29.539 "core_count": 1 00:32:29.539 } 00:32:29.800 22:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 236467 00:32:29.800 22:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 236467 ']' 00:32:29.800 22:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 236467 00:32:29.800 22:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:32:29.800 22:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:29.800 
22:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 236467 00:32:29.800 22:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:29.800 22:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:29.800 22:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 236467' 00:32:29.800 killing process with pid 236467 00:32:29.800 22:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 236467 00:32:29.800 Received shutdown signal, test time was about 1.000000 seconds 00:32:29.800 00:32:29.800 Latency(us) 00:32:29.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:29.800 =================================================================================================================== 00:32:29.800 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:29.800 22:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 236467 00:32:29.800 22:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 235952 00:32:29.800 22:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 235952 ']' 00:32:29.800 22:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 235952 00:32:29.800 22:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:32:29.800 22:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:29.800 22:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 235952 00:32:30.061 22:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:30.061 22:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:30.061 22:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 235952' 00:32:30.061 killing process with pid 235952 00:32:30.061 22:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 235952 00:32:30.061 22:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 235952 00:32:30.061 22:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:32:30.061 22:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:30.061 22:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:30.061 22:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:30.061 22:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=236996 00:32:30.061 22:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 236996 00:32:30.061 22:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:32:30.061 22:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 236996 ']' 00:32:30.061 22:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:30.061 22:31:25 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:30.061 22:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:30.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:30.061 22:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:30.061 22:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:30.322 [2024-10-01 22:31:25.357080] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:32:30.322 [2024-10-01 22:31:25.357137] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:30.322 [2024-10-01 22:31:25.422988] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:30.322 [2024-10-01 22:31:25.484969] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:30.322 [2024-10-01 22:31:25.485010] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:30.322 [2024-10-01 22:31:25.485020] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:30.322 [2024-10-01 22:31:25.485027] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:30.322 [2024-10-01 22:31:25.485032] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:30.322 [2024-10-01 22:31:25.485053] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:31.263 22:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:31.263 22:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:32:31.263 22:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:31.263 22:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:31.263 22:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:31.263 22:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:31.263 22:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:32:31.263 22:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.263 22:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:31.263 [2024-10-01 22:31:26.200641] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:31.263 malloc0 00:32:31.263 [2024-10-01 22:31:26.227408] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:31.263 [2024-10-01 22:31:26.227623] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:31.263 22:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.263 22:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=237339 00:32:31.263 22:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@258 -- # waitforlisten 237339 /var/tmp/bdevperf.sock 00:32:31.263 22:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:32:31.263 22:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 237339 ']' 00:32:31.263 22:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:31.263 22:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:31.263 22:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:31.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:31.263 22:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:31.263 22:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:31.263 [2024-10-01 22:31:26.305019] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:32:31.263 [2024-10-01 22:31:26.305066] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid237339 ] 00:32:31.263 [2024-10-01 22:31:26.380361] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:31.263 [2024-10-01 22:31:26.434785] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:32.203 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:32.203 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:32:32.203 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zLn9L4hQ3j 00:32:32.203 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:32:32.203 [2024-10-01 22:31:27.394325] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:32.463 nvme0n1 00:32:32.463 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:32.463 Running I/O for 1 seconds... 
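The 1-second verify run above is driven entirely over the initiator's RPC socket: the same PSK file is registered in the bdevperf keyring, a controller is attached with --psk, and perform_tests starts the I/O. The client-side steps, as issued in the trace (sketch):

scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zLn9L4hQ3j
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests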
00:32:33.402 3069.00 IOPS, 11.99 MiB/s 00:32:33.402 Latency(us) 00:32:33.402 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:33.402 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:33.402 Verification LBA range: start 0x0 length 0x2000 00:32:33.402 nvme0n1 : 1.02 3135.05 12.25 0.00 0.00 40446.80 4669.44 179131.73 00:32:33.402 =================================================================================================================== 00:32:33.402 Total : 3135.05 12.25 0.00 0.00 40446.80 4669.44 179131.73 00:32:33.402 { 00:32:33.402 "results": [ 00:32:33.402 { 00:32:33.402 "job": "nvme0n1", 00:32:33.402 "core_mask": "0x2", 00:32:33.403 "workload": "verify", 00:32:33.403 "status": "finished", 00:32:33.403 "verify_range": { 00:32:33.403 "start": 0, 00:32:33.403 "length": 8192 00:32:33.403 }, 00:32:33.403 "queue_depth": 128, 00:32:33.403 "io_size": 4096, 00:32:33.403 "runtime": 1.019762, 00:32:33.403 "iops": 3135.045236045273, 00:32:33.403 "mibps": 12.246270453301848, 00:32:33.403 "io_failed": 0, 00:32:33.403 "io_timeout": 0, 00:32:33.403 "avg_latency_us": 40446.79780627672, 00:32:33.403 "min_latency_us": 4669.44, 00:32:33.403 "max_latency_us": 179131.73333333334 00:32:33.403 } 00:32:33.403 ], 00:32:33.403 "core_count": 1 00:32:33.403 } 00:32:33.403 22:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:32:33.403 22:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.403 22:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:33.662 22:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.662 22:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:32:33.662 "subsystems": [ 00:32:33.662 { 00:32:33.662 "subsystem": "keyring", 00:32:33.662 "config": [ 00:32:33.662 { 00:32:33.662 "method": "keyring_file_add_key", 00:32:33.662 "params": { 00:32:33.662 "name": "key0", 00:32:33.662 "path": "/tmp/tmp.zLn9L4hQ3j" 00:32:33.662 } 00:32:33.662 } 00:32:33.662 ] 00:32:33.662 }, 00:32:33.662 { 00:32:33.662 "subsystem": "iobuf", 00:32:33.662 "config": [ 00:32:33.662 { 00:32:33.662 "method": "iobuf_set_options", 00:32:33.662 "params": { 00:32:33.662 "small_pool_count": 8192, 00:32:33.662 "large_pool_count": 1024, 00:32:33.662 "small_bufsize": 8192, 00:32:33.662 "large_bufsize": 135168 00:32:33.662 } 00:32:33.662 } 00:32:33.662 ] 00:32:33.662 }, 00:32:33.662 { 00:32:33.662 "subsystem": "sock", 00:32:33.662 "config": [ 00:32:33.662 { 00:32:33.662 "method": "sock_set_default_impl", 00:32:33.662 "params": { 00:32:33.662 "impl_name": "posix" 00:32:33.662 } 00:32:33.662 }, 00:32:33.662 { 00:32:33.662 "method": "sock_impl_set_options", 00:32:33.662 "params": { 00:32:33.662 "impl_name": "ssl", 00:32:33.662 "recv_buf_size": 4096, 00:32:33.662 "send_buf_size": 4096, 00:32:33.662 "enable_recv_pipe": true, 00:32:33.662 "enable_quickack": false, 00:32:33.662 "enable_placement_id": 0, 00:32:33.662 "enable_zerocopy_send_server": true, 00:32:33.662 "enable_zerocopy_send_client": false, 00:32:33.662 "zerocopy_threshold": 0, 00:32:33.662 "tls_version": 0, 00:32:33.662 "enable_ktls": false 00:32:33.662 } 00:32:33.662 }, 00:32:33.662 { 00:32:33.662 "method": "sock_impl_set_options", 00:32:33.662 "params": { 00:32:33.662 "impl_name": "posix", 00:32:33.662 "recv_buf_size": 2097152, 00:32:33.662 "send_buf_size": 2097152, 00:32:33.662 "enable_recv_pipe": true, 
00:32:33.662 "enable_quickack": false, 00:32:33.662 "enable_placement_id": 0, 00:32:33.662 "enable_zerocopy_send_server": true, 00:32:33.662 "enable_zerocopy_send_client": false, 00:32:33.662 "zerocopy_threshold": 0, 00:32:33.662 "tls_version": 0, 00:32:33.662 "enable_ktls": false 00:32:33.662 } 00:32:33.662 } 00:32:33.662 ] 00:32:33.662 }, 00:32:33.662 { 00:32:33.662 "subsystem": "vmd", 00:32:33.662 "config": [] 00:32:33.662 }, 00:32:33.662 { 00:32:33.662 "subsystem": "accel", 00:32:33.662 "config": [ 00:32:33.662 { 00:32:33.662 "method": "accel_set_options", 00:32:33.662 "params": { 00:32:33.662 "small_cache_size": 128, 00:32:33.662 "large_cache_size": 16, 00:32:33.662 "task_count": 2048, 00:32:33.662 "sequence_count": 2048, 00:32:33.662 "buf_count": 2048 00:32:33.662 } 00:32:33.662 } 00:32:33.662 ] 00:32:33.662 }, 00:32:33.662 { 00:32:33.663 "subsystem": "bdev", 00:32:33.663 "config": [ 00:32:33.663 { 00:32:33.663 "method": "bdev_set_options", 00:32:33.663 "params": { 00:32:33.663 "bdev_io_pool_size": 65535, 00:32:33.663 "bdev_io_cache_size": 256, 00:32:33.663 "bdev_auto_examine": true, 00:32:33.663 "iobuf_small_cache_size": 128, 00:32:33.663 "iobuf_large_cache_size": 16, 00:32:33.663 "bdev_io_stack_size": 4096 00:32:33.663 } 00:32:33.663 }, 00:32:33.663 { 00:32:33.663 "method": "bdev_raid_set_options", 00:32:33.663 "params": { 00:32:33.663 "process_window_size_kb": 1024, 00:32:33.663 "process_max_bandwidth_mb_sec": 0 00:32:33.663 } 00:32:33.663 }, 00:32:33.663 { 00:32:33.663 "method": "bdev_iscsi_set_options", 00:32:33.663 "params": { 00:32:33.663 "timeout_sec": 30 00:32:33.663 } 00:32:33.663 }, 00:32:33.663 { 00:32:33.663 "method": "bdev_nvme_set_options", 00:32:33.663 "params": { 00:32:33.663 "action_on_timeout": "none", 00:32:33.663 "timeout_us": 0, 00:32:33.663 "timeout_admin_us": 0, 00:32:33.663 "keep_alive_timeout_ms": 10000, 00:32:33.663 "arbitration_burst": 0, 00:32:33.663 "low_priority_weight": 0, 00:32:33.663 "medium_priority_weight": 0, 00:32:33.663 "high_priority_weight": 0, 00:32:33.663 "nvme_adminq_poll_period_us": 10000, 00:32:33.663 "nvme_ioq_poll_period_us": 0, 00:32:33.663 "io_queue_requests": 0, 00:32:33.663 "delay_cmd_submit": true, 00:32:33.663 "transport_retry_count": 4, 00:32:33.663 "bdev_retry_count": 3, 00:32:33.663 "transport_ack_timeout": 0, 00:32:33.663 "ctrlr_loss_timeout_sec": 0, 00:32:33.663 "reconnect_delay_sec": 0, 00:32:33.663 "fast_io_fail_timeout_sec": 0, 00:32:33.663 "disable_auto_failback": false, 00:32:33.663 "generate_uuids": false, 00:32:33.663 "transport_tos": 0, 00:32:33.663 "nvme_error_stat": false, 00:32:33.663 "rdma_srq_size": 0, 00:32:33.663 "io_path_stat": false, 00:32:33.663 "allow_accel_sequence": false, 00:32:33.663 "rdma_max_cq_size": 0, 00:32:33.663 "rdma_cm_event_timeout_ms": 0, 00:32:33.663 "dhchap_digests": [ 00:32:33.663 "sha256", 00:32:33.663 "sha384", 00:32:33.663 "sha512" 00:32:33.663 ], 00:32:33.663 "dhchap_dhgroups": [ 00:32:33.663 "null", 00:32:33.663 "ffdhe2048", 00:32:33.663 "ffdhe3072", 00:32:33.663 "ffdhe4096", 00:32:33.663 "ffdhe6144", 00:32:33.663 "ffdhe8192" 00:32:33.663 ] 00:32:33.663 } 00:32:33.663 }, 00:32:33.663 { 00:32:33.663 "method": "bdev_nvme_set_hotplug", 00:32:33.663 "params": { 00:32:33.663 "period_us": 100000, 00:32:33.663 "enable": false 00:32:33.663 } 00:32:33.663 }, 00:32:33.663 { 00:32:33.663 "method": "bdev_malloc_create", 00:32:33.663 "params": { 00:32:33.663 "name": "malloc0", 00:32:33.663 "num_blocks": 8192, 00:32:33.663 "block_size": 4096, 00:32:33.663 "physical_block_size": 4096, 00:32:33.663 
"uuid": "f1285eea-bf99-4661-baba-05589e8d3094", 00:32:33.663 "optimal_io_boundary": 0, 00:32:33.663 "md_size": 0, 00:32:33.663 "dif_type": 0, 00:32:33.663 "dif_is_head_of_md": false, 00:32:33.663 "dif_pi_format": 0 00:32:33.663 } 00:32:33.663 }, 00:32:33.663 { 00:32:33.663 "method": "bdev_wait_for_examine" 00:32:33.663 } 00:32:33.663 ] 00:32:33.663 }, 00:32:33.663 { 00:32:33.663 "subsystem": "nbd", 00:32:33.663 "config": [] 00:32:33.663 }, 00:32:33.663 { 00:32:33.663 "subsystem": "scheduler", 00:32:33.663 "config": [ 00:32:33.663 { 00:32:33.663 "method": "framework_set_scheduler", 00:32:33.663 "params": { 00:32:33.663 "name": "static" 00:32:33.663 } 00:32:33.663 } 00:32:33.663 ] 00:32:33.663 }, 00:32:33.663 { 00:32:33.663 "subsystem": "nvmf", 00:32:33.663 "config": [ 00:32:33.663 { 00:32:33.663 "method": "nvmf_set_config", 00:32:33.663 "params": { 00:32:33.663 "discovery_filter": "match_any", 00:32:33.663 "admin_cmd_passthru": { 00:32:33.663 "identify_ctrlr": false 00:32:33.663 }, 00:32:33.663 "dhchap_digests": [ 00:32:33.663 "sha256", 00:32:33.663 "sha384", 00:32:33.663 "sha512" 00:32:33.663 ], 00:32:33.663 "dhchap_dhgroups": [ 00:32:33.663 "null", 00:32:33.663 "ffdhe2048", 00:32:33.663 "ffdhe3072", 00:32:33.663 "ffdhe4096", 00:32:33.663 "ffdhe6144", 00:32:33.663 "ffdhe8192" 00:32:33.663 ] 00:32:33.663 } 00:32:33.663 }, 00:32:33.663 { 00:32:33.663 "method": "nvmf_set_max_subsystems", 00:32:33.663 "params": { 00:32:33.663 "max_subsystems": 1024 00:32:33.663 } 00:32:33.663 }, 00:32:33.663 { 00:32:33.663 "method": "nvmf_set_crdt", 00:32:33.663 "params": { 00:32:33.663 "crdt1": 0, 00:32:33.663 "crdt2": 0, 00:32:33.663 "crdt3": 0 00:32:33.663 } 00:32:33.663 }, 00:32:33.663 { 00:32:33.663 "method": "nvmf_create_transport", 00:32:33.663 "params": { 00:32:33.663 "trtype": "TCP", 00:32:33.663 "max_queue_depth": 128, 00:32:33.663 "max_io_qpairs_per_ctrlr": 127, 00:32:33.663 "in_capsule_data_size": 4096, 00:32:33.663 "max_io_size": 131072, 00:32:33.663 "io_unit_size": 131072, 00:32:33.663 "max_aq_depth": 128, 00:32:33.663 "num_shared_buffers": 511, 00:32:33.663 "buf_cache_size": 4294967295, 00:32:33.663 "dif_insert_or_strip": false, 00:32:33.663 "zcopy": false, 00:32:33.663 "c2h_success": false, 00:32:33.663 "sock_priority": 0, 00:32:33.663 "abort_timeout_sec": 1, 00:32:33.663 "ack_timeout": 0, 00:32:33.663 "data_wr_pool_size": 0 00:32:33.663 } 00:32:33.663 }, 00:32:33.663 { 00:32:33.663 "method": "nvmf_create_subsystem", 00:32:33.663 "params": { 00:32:33.663 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:33.663 "allow_any_host": false, 00:32:33.663 "serial_number": "00000000000000000000", 00:32:33.663 "model_number": "SPDK bdev Controller", 00:32:33.663 "max_namespaces": 32, 00:32:33.663 "min_cntlid": 1, 00:32:33.663 "max_cntlid": 65519, 00:32:33.663 "ana_reporting": false 00:32:33.663 } 00:32:33.663 }, 00:32:33.663 { 00:32:33.663 "method": "nvmf_subsystem_add_host", 00:32:33.663 "params": { 00:32:33.663 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:33.663 "host": "nqn.2016-06.io.spdk:host1", 00:32:33.663 "psk": "key0" 00:32:33.663 } 00:32:33.663 }, 00:32:33.663 { 00:32:33.663 "method": "nvmf_subsystem_add_ns", 00:32:33.663 "params": { 00:32:33.663 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:33.663 "namespace": { 00:32:33.663 "nsid": 1, 00:32:33.663 "bdev_name": "malloc0", 00:32:33.663 "nguid": "F1285EEABF994661BABA05589E8D3094", 00:32:33.663 "uuid": "f1285eea-bf99-4661-baba-05589e8d3094", 00:32:33.663 "no_auto_visible": false 00:32:33.663 } 00:32:33.663 } 00:32:33.663 }, 00:32:33.663 { 00:32:33.663 
"method": "nvmf_subsystem_add_listener", 00:32:33.663 "params": { 00:32:33.663 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:33.663 "listen_address": { 00:32:33.663 "trtype": "TCP", 00:32:33.663 "adrfam": "IPv4", 00:32:33.663 "traddr": "10.0.0.2", 00:32:33.663 "trsvcid": "4420" 00:32:33.663 }, 00:32:33.663 "secure_channel": false, 00:32:33.663 "sock_impl": "ssl" 00:32:33.663 } 00:32:33.663 } 00:32:33.663 ] 00:32:33.663 } 00:32:33.663 ] 00:32:33.664 }' 00:32:33.664 22:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:32:33.928 22:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:32:33.928 "subsystems": [ 00:32:33.928 { 00:32:33.928 "subsystem": "keyring", 00:32:33.928 "config": [ 00:32:33.928 { 00:32:33.928 "method": "keyring_file_add_key", 00:32:33.928 "params": { 00:32:33.928 "name": "key0", 00:32:33.928 "path": "/tmp/tmp.zLn9L4hQ3j" 00:32:33.928 } 00:32:33.928 } 00:32:33.928 ] 00:32:33.928 }, 00:32:33.928 { 00:32:33.928 "subsystem": "iobuf", 00:32:33.928 "config": [ 00:32:33.928 { 00:32:33.928 "method": "iobuf_set_options", 00:32:33.928 "params": { 00:32:33.928 "small_pool_count": 8192, 00:32:33.928 "large_pool_count": 1024, 00:32:33.928 "small_bufsize": 8192, 00:32:33.928 "large_bufsize": 135168 00:32:33.928 } 00:32:33.928 } 00:32:33.928 ] 00:32:33.928 }, 00:32:33.928 { 00:32:33.928 "subsystem": "sock", 00:32:33.928 "config": [ 00:32:33.928 { 00:32:33.928 "method": "sock_set_default_impl", 00:32:33.928 "params": { 00:32:33.928 "impl_name": "posix" 00:32:33.928 } 00:32:33.928 }, 00:32:33.928 { 00:32:33.928 "method": "sock_impl_set_options", 00:32:33.928 "params": { 00:32:33.928 "impl_name": "ssl", 00:32:33.928 "recv_buf_size": 4096, 00:32:33.928 "send_buf_size": 4096, 00:32:33.928 "enable_recv_pipe": true, 00:32:33.928 "enable_quickack": false, 00:32:33.928 "enable_placement_id": 0, 00:32:33.928 "enable_zerocopy_send_server": true, 00:32:33.928 "enable_zerocopy_send_client": false, 00:32:33.928 "zerocopy_threshold": 0, 00:32:33.928 "tls_version": 0, 00:32:33.928 "enable_ktls": false 00:32:33.928 } 00:32:33.928 }, 00:32:33.928 { 00:32:33.928 "method": "sock_impl_set_options", 00:32:33.928 "params": { 00:32:33.928 "impl_name": "posix", 00:32:33.928 "recv_buf_size": 2097152, 00:32:33.928 "send_buf_size": 2097152, 00:32:33.928 "enable_recv_pipe": true, 00:32:33.928 "enable_quickack": false, 00:32:33.928 "enable_placement_id": 0, 00:32:33.928 "enable_zerocopy_send_server": true, 00:32:33.928 "enable_zerocopy_send_client": false, 00:32:33.928 "zerocopy_threshold": 0, 00:32:33.928 "tls_version": 0, 00:32:33.928 "enable_ktls": false 00:32:33.928 } 00:32:33.928 } 00:32:33.928 ] 00:32:33.928 }, 00:32:33.928 { 00:32:33.928 "subsystem": "vmd", 00:32:33.928 "config": [] 00:32:33.928 }, 00:32:33.928 { 00:32:33.928 "subsystem": "accel", 00:32:33.928 "config": [ 00:32:33.928 { 00:32:33.928 "method": "accel_set_options", 00:32:33.928 "params": { 00:32:33.928 "small_cache_size": 128, 00:32:33.928 "large_cache_size": 16, 00:32:33.928 "task_count": 2048, 00:32:33.928 "sequence_count": 2048, 00:32:33.928 "buf_count": 2048 00:32:33.928 } 00:32:33.928 } 00:32:33.928 ] 00:32:33.928 }, 00:32:33.928 { 00:32:33.928 "subsystem": "bdev", 00:32:33.928 "config": [ 00:32:33.928 { 00:32:33.928 "method": "bdev_set_options", 00:32:33.928 "params": { 00:32:33.928 "bdev_io_pool_size": 65535, 00:32:33.928 "bdev_io_cache_size": 256, 00:32:33.928 "bdev_auto_examine": true, 
00:32:33.928 "iobuf_small_cache_size": 128, 00:32:33.928 "iobuf_large_cache_size": 16, 00:32:33.928 "bdev_io_stack_size": 4096 00:32:33.928 } 00:32:33.928 }, 00:32:33.928 { 00:32:33.928 "method": "bdev_raid_set_options", 00:32:33.928 "params": { 00:32:33.928 "process_window_size_kb": 1024, 00:32:33.928 "process_max_bandwidth_mb_sec": 0 00:32:33.928 } 00:32:33.928 }, 00:32:33.928 { 00:32:33.928 "method": "bdev_iscsi_set_options", 00:32:33.928 "params": { 00:32:33.928 "timeout_sec": 30 00:32:33.928 } 00:32:33.928 }, 00:32:33.928 { 00:32:33.928 "method": "bdev_nvme_set_options", 00:32:33.928 "params": { 00:32:33.928 "action_on_timeout": "none", 00:32:33.928 "timeout_us": 0, 00:32:33.928 "timeout_admin_us": 0, 00:32:33.928 "keep_alive_timeout_ms": 10000, 00:32:33.928 "arbitration_burst": 0, 00:32:33.928 "low_priority_weight": 0, 00:32:33.928 "medium_priority_weight": 0, 00:32:33.928 "high_priority_weight": 0, 00:32:33.928 "nvme_adminq_poll_period_us": 10000, 00:32:33.928 "nvme_ioq_poll_period_us": 0, 00:32:33.928 "io_queue_requests": 512, 00:32:33.928 "delay_cmd_submit": true, 00:32:33.928 "transport_retry_count": 4, 00:32:33.928 "bdev_retry_count": 3, 00:32:33.928 "transport_ack_timeout": 0, 00:32:33.928 "ctrlr_loss_timeout_sec": 0, 00:32:33.928 "reconnect_delay_sec": 0, 00:32:33.928 "fast_io_fail_timeout_sec": 0, 00:32:33.928 "disable_auto_failback": false, 00:32:33.928 "generate_uuids": false, 00:32:33.928 "transport_tos": 0, 00:32:33.928 "nvme_error_stat": false, 00:32:33.928 "rdma_srq_size": 0, 00:32:33.928 "io_path_stat": false, 00:32:33.928 "allow_accel_sequence": false, 00:32:33.928 "rdma_max_cq_size": 0, 00:32:33.928 "rdma_cm_event_timeout_ms": 0, 00:32:33.928 "dhchap_digests": [ 00:32:33.928 "sha256", 00:32:33.928 "sha384", 00:32:33.928 "sha512" 00:32:33.928 ], 00:32:33.928 "dhchap_dhgroups": [ 00:32:33.928 "null", 00:32:33.929 "ffdhe2048", 00:32:33.929 "ffdhe3072", 00:32:33.929 "ffdhe4096", 00:32:33.929 "ffdhe6144", 00:32:33.929 "ffdhe8192" 00:32:33.929 ] 00:32:33.929 } 00:32:33.929 }, 00:32:33.929 { 00:32:33.929 "method": "bdev_nvme_attach_controller", 00:32:33.929 "params": { 00:32:33.929 "name": "nvme0", 00:32:33.929 "trtype": "TCP", 00:32:33.929 "adrfam": "IPv4", 00:32:33.929 "traddr": "10.0.0.2", 00:32:33.929 "trsvcid": "4420", 00:32:33.929 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:33.929 "prchk_reftag": false, 00:32:33.929 "prchk_guard": false, 00:32:33.929 "ctrlr_loss_timeout_sec": 0, 00:32:33.929 "reconnect_delay_sec": 0, 00:32:33.929 "fast_io_fail_timeout_sec": 0, 00:32:33.929 "psk": "key0", 00:32:33.929 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:33.929 "hdgst": false, 00:32:33.929 "ddgst": false 00:32:33.929 } 00:32:33.929 }, 00:32:33.929 { 00:32:33.929 "method": "bdev_nvme_set_hotplug", 00:32:33.929 "params": { 00:32:33.929 "period_us": 100000, 00:32:33.929 "enable": false 00:32:33.929 } 00:32:33.929 }, 00:32:33.929 { 00:32:33.929 "method": "bdev_enable_histogram", 00:32:33.929 "params": { 00:32:33.929 "name": "nvme0n1", 00:32:33.929 "enable": true 00:32:33.929 } 00:32:33.929 }, 00:32:33.929 { 00:32:33.929 "method": "bdev_wait_for_examine" 00:32:33.929 } 00:32:33.929 ] 00:32:33.929 }, 00:32:33.929 { 00:32:33.929 "subsystem": "nbd", 00:32:33.929 "config": [] 00:32:33.929 } 00:32:33.929 ] 00:32:33.929 }' 00:32:33.929 22:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 237339 00:32:33.929 22:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 237339 ']' 00:32:33.929 22:31:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 237339 00:32:33.929 22:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:32:33.929 22:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:33.929 22:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 237339 00:32:33.929 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:33.929 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:33.929 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 237339' 00:32:33.929 killing process with pid 237339 00:32:33.929 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 237339 00:32:33.929 Received shutdown signal, test time was about 1.000000 seconds 00:32:33.929 00:32:33.929 Latency(us) 00:32:33.929 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:33.929 =================================================================================================================== 00:32:33.929 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:33.929 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 237339 00:32:34.190 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 236996 00:32:34.190 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 236996 ']' 00:32:34.190 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 236996 00:32:34.190 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:32:34.190 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:34.190 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 236996 00:32:34.190 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:34.190 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:34.190 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 236996' 00:32:34.190 killing process with pid 236996 00:32:34.190 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 236996 00:32:34.190 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 236996 00:32:34.451 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:32:34.451 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:34.451 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:34.451 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:34.451 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:32:34.451 "subsystems": [ 00:32:34.451 { 00:32:34.451 "subsystem": "keyring", 00:32:34.451 "config": [ 00:32:34.451 { 00:32:34.451 "method": "keyring_file_add_key", 00:32:34.451 "params": { 00:32:34.451 "name": "key0", 00:32:34.451 "path": 
"/tmp/tmp.zLn9L4hQ3j" 00:32:34.451 } 00:32:34.451 } 00:32:34.451 ] 00:32:34.451 }, 00:32:34.451 { 00:32:34.451 "subsystem": "iobuf", 00:32:34.451 "config": [ 00:32:34.451 { 00:32:34.451 "method": "iobuf_set_options", 00:32:34.451 "params": { 00:32:34.451 "small_pool_count": 8192, 00:32:34.451 "large_pool_count": 1024, 00:32:34.451 "small_bufsize": 8192, 00:32:34.451 "large_bufsize": 135168 00:32:34.451 } 00:32:34.451 } 00:32:34.451 ] 00:32:34.451 }, 00:32:34.451 { 00:32:34.451 "subsystem": "sock", 00:32:34.451 "config": [ 00:32:34.451 { 00:32:34.451 "method": "sock_set_default_impl", 00:32:34.451 "params": { 00:32:34.451 "impl_name": "posix" 00:32:34.451 } 00:32:34.451 }, 00:32:34.451 { 00:32:34.451 "method": "sock_impl_set_options", 00:32:34.451 "params": { 00:32:34.451 "impl_name": "ssl", 00:32:34.451 "recv_buf_size": 4096, 00:32:34.451 "send_buf_size": 4096, 00:32:34.451 "enable_recv_pipe": true, 00:32:34.451 "enable_quickack": false, 00:32:34.451 "enable_placement_id": 0, 00:32:34.451 "enable_zerocopy_send_server": true, 00:32:34.451 "enable_zerocopy_send_client": false, 00:32:34.451 "zerocopy_threshold": 0, 00:32:34.451 "tls_version": 0, 00:32:34.451 "enable_ktls": false 00:32:34.451 } 00:32:34.451 }, 00:32:34.451 { 00:32:34.451 "method": "sock_impl_set_options", 00:32:34.451 "params": { 00:32:34.451 "impl_name": "posix", 00:32:34.451 "recv_buf_size": 2097152, 00:32:34.451 "send_buf_size": 2097152, 00:32:34.451 "enable_recv_pipe": true, 00:32:34.451 "enable_quickack": false, 00:32:34.451 "enable_placement_id": 0, 00:32:34.451 "enable_zerocopy_send_server": true, 00:32:34.451 "enable_zerocopy_send_client": false, 00:32:34.451 "zerocopy_threshold": 0, 00:32:34.451 "tls_version": 0, 00:32:34.451 "enable_ktls": false 00:32:34.451 } 00:32:34.451 } 00:32:34.451 ] 00:32:34.451 }, 00:32:34.451 { 00:32:34.451 "subsystem": "vmd", 00:32:34.451 "config": [] 00:32:34.451 }, 00:32:34.451 { 00:32:34.451 "subsystem": "accel", 00:32:34.451 "config": [ 00:32:34.451 { 00:32:34.451 "method": "accel_set_options", 00:32:34.451 "params": { 00:32:34.451 "small_cache_size": 128, 00:32:34.451 "large_cache_size": 16, 00:32:34.451 "task_count": 2048, 00:32:34.451 "sequence_count": 2048, 00:32:34.451 "buf_count": 2048 00:32:34.451 } 00:32:34.451 } 00:32:34.451 ] 00:32:34.451 }, 00:32:34.451 { 00:32:34.451 "subsystem": "bdev", 00:32:34.451 "config": [ 00:32:34.451 { 00:32:34.451 "method": "bdev_set_options", 00:32:34.451 "params": { 00:32:34.451 "bdev_io_pool_size": 65535, 00:32:34.451 "bdev_io_cache_size": 256, 00:32:34.451 "bdev_auto_examine": true, 00:32:34.451 "iobuf_small_cache_size": 128, 00:32:34.451 "iobuf_large_cache_size": 16, 00:32:34.451 "bdev_io_stack_size": 4096 00:32:34.451 } 00:32:34.451 }, 00:32:34.451 { 00:32:34.451 "method": "bdev_raid_set_options", 00:32:34.451 "params": { 00:32:34.451 "process_window_size_kb": 1024, 00:32:34.451 "process_max_bandwidth_mb_sec": 0 00:32:34.451 } 00:32:34.451 }, 00:32:34.451 { 00:32:34.451 "method": "bdev_iscsi_set_options", 00:32:34.451 "params": { 00:32:34.451 "timeout_sec": 30 00:32:34.451 } 00:32:34.451 }, 00:32:34.451 { 00:32:34.451 "method": "bdev_nvme_set_options", 00:32:34.451 "params": { 00:32:34.451 "action_on_timeout": "none", 00:32:34.451 "timeout_us": 0, 00:32:34.451 "timeout_admin_us": 0, 00:32:34.451 "keep_alive_timeout_ms": 10000, 00:32:34.451 "arbitration_burst": 0, 00:32:34.451 "low_priority_weight": 0, 00:32:34.451 "medium_priority_weight": 0, 00:32:34.451 "high_priority_weight": 0, 00:32:34.451 "nvme_adminq_poll_period_us": 10000, 00:32:34.451 
"nvme_ioq_poll_period_us": 0, 00:32:34.451 "io_queue_requests": 0, 00:32:34.451 "delay_cmd_submit": true, 00:32:34.451 "transport_retry_count": 4, 00:32:34.451 "bdev_retry_count": 3, 00:32:34.451 "transport_ack_timeout": 0, 00:32:34.451 "ctrlr_loss_timeout_sec": 0, 00:32:34.451 "reconnect_delay_sec": 0, 00:32:34.451 "fast_io_fail_timeout_sec": 0, 00:32:34.451 "disable_auto_failback": false, 00:32:34.451 "generate_uuids": false, 00:32:34.451 "transport_tos": 0, 00:32:34.451 "nvme_error_stat": false, 00:32:34.451 "rdma_srq_size": 0, 00:32:34.451 "io_path_stat": false, 00:32:34.451 "allow_accel_sequence": false, 00:32:34.451 "rdma_max_cq_size": 0, 00:32:34.451 "rdma_cm_event_timeout_ms": 0, 00:32:34.451 "dhchap_digests": [ 00:32:34.452 "sha256", 00:32:34.452 "sha384", 00:32:34.452 "sha512" 00:32:34.452 ], 00:32:34.452 "dhchap_dhgroups": [ 00:32:34.452 "null", 00:32:34.452 "ffdhe2048", 00:32:34.452 "ffdhe3072", 00:32:34.452 "ffdhe4096", 00:32:34.452 "ffdhe6144", 00:32:34.452 "ffdhe8192" 00:32:34.452 ] 00:32:34.452 } 00:32:34.452 }, 00:32:34.452 { 00:32:34.452 "method": "bdev_nvme_set_hotplug", 00:32:34.452 "params": { 00:32:34.452 "period_us": 100000, 00:32:34.452 "enable": false 00:32:34.452 } 00:32:34.452 }, 00:32:34.452 { 00:32:34.452 "method": "bdev_malloc_create", 00:32:34.452 "params": { 00:32:34.452 "name": "malloc0", 00:32:34.452 "num_blocks": 8192, 00:32:34.452 "block_size": 4096, 00:32:34.452 "physical_block_size": 4096, 00:32:34.452 "uuid": "f1285eea-bf99-4661-baba-05589e8d3094", 00:32:34.452 "optimal_io_boundary": 0, 00:32:34.452 "md_size": 0, 00:32:34.452 "dif_type": 0, 00:32:34.452 "dif_is_head_of_md": false, 00:32:34.452 "dif_pi_format": 0 00:32:34.452 } 00:32:34.452 }, 00:32:34.452 { 00:32:34.452 "method": "bdev_wait_for_examine" 00:32:34.452 } 00:32:34.452 ] 00:32:34.452 }, 00:32:34.452 { 00:32:34.452 "subsystem": "nbd", 00:32:34.452 "config": [] 00:32:34.452 }, 00:32:34.452 { 00:32:34.452 "subsystem": "scheduler", 00:32:34.452 "config": [ 00:32:34.452 { 00:32:34.452 "method": "framework_set_scheduler", 00:32:34.452 "params": { 00:32:34.452 "name": "static" 00:32:34.452 } 00:32:34.452 } 00:32:34.452 ] 00:32:34.452 }, 00:32:34.452 { 00:32:34.452 "subsystem": "nvmf", 00:32:34.452 "config": [ 00:32:34.452 { 00:32:34.452 "method": "nvmf_set_config", 00:32:34.452 "params": { 00:32:34.452 "discovery_filter": "match_any", 00:32:34.452 "admin_cmd_passthru": { 00:32:34.452 "identify_ctrlr": false 00:32:34.452 }, 00:32:34.452 "dhchap_digests": [ 00:32:34.452 "sha256", 00:32:34.452 "sha384", 00:32:34.452 "sha512" 00:32:34.452 ], 00:32:34.452 "dhchap_dhgroups": [ 00:32:34.452 "null", 00:32:34.452 "ffdhe2048", 00:32:34.452 "ffdhe3072", 00:32:34.452 "ffdhe4096", 00:32:34.452 "ffdhe6144", 00:32:34.452 "ffdhe8192" 00:32:34.452 ] 00:32:34.452 } 00:32:34.452 }, 00:32:34.452 { 00:32:34.452 "method": "nvmf_set_max_subsystems", 00:32:34.452 "params": { 00:32:34.452 "max_subsystems": 1024 00:32:34.452 } 00:32:34.452 }, 00:32:34.452 { 00:32:34.452 "method": "nvmf_set_crdt", 00:32:34.452 "params": { 00:32:34.452 "crdt1": 0, 00:32:34.452 "crdt2": 0, 00:32:34.452 "crdt3": 0 00:32:34.452 } 00:32:34.452 }, 00:32:34.452 { 00:32:34.452 "method": "nvmf_create_transport", 00:32:34.452 "params": { 00:32:34.452 "trtype": "TCP", 00:32:34.452 "max_queue_depth": 128, 00:32:34.452 "max_io_qpairs_per_ctrlr": 127, 00:32:34.452 "in_capsule_data_size": 4096, 00:32:34.452 "max_io_size": 131072, 00:32:34.452 "io_unit_size": 131072, 00:32:34.452 "max_aq_depth": 128, 00:32:34.452 "num_shared_buffers": 511, 00:32:34.452 
"buf_cache_size": 4294967295, 00:32:34.452 "dif_insert_or_strip": false, 00:32:34.452 "zcopy": false, 00:32:34.452 "c2h_success": false, 00:32:34.452 "sock_priority": 0, 00:32:34.452 "abort_timeout_sec": 1, 00:32:34.452 "ack_timeout": 0, 00:32:34.452 "data_wr_pool_size": 0 00:32:34.452 } 00:32:34.452 }, 00:32:34.452 { 00:32:34.452 "method": "nvmf_create_subsystem", 00:32:34.452 "params": { 00:32:34.452 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:34.452 "allow_any_host": false, 00:32:34.452 "serial_number": "00000000000000000000", 00:32:34.452 "model_number": "SPDK bdev Controller", 00:32:34.452 "max_namespaces": 32, 00:32:34.452 "min_cntlid": 1, 00:32:34.452 "max_cntlid": 65519, 00:32:34.452 "ana_reporting": false 00:32:34.452 } 00:32:34.452 }, 00:32:34.452 { 00:32:34.452 "method": "nvmf_subsystem_add_host", 00:32:34.452 "params": { 00:32:34.452 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:34.452 "host": "nqn.2016-06.io.spdk:host1", 00:32:34.452 "psk": "key0" 00:32:34.452 } 00:32:34.452 }, 00:32:34.452 { 00:32:34.452 "method": "nvmf_subsystem_add_ns", 00:32:34.452 "params": { 00:32:34.452 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:34.452 "namespace": { 00:32:34.452 "nsid": 1, 00:32:34.452 "bdev_name": "malloc0", 00:32:34.452 "nguid": "F1285EEABF994661BABA05589E8D3094", 00:32:34.452 "uuid": "f1285eea-bf99-4661-baba-05589e8d3094", 00:32:34.452 "no_auto_visible": false 00:32:34.452 } 00:32:34.452 } 00:32:34.452 }, 00:32:34.452 { 00:32:34.452 "method": "nvmf_subsystem_add_listener", 00:32:34.452 "params": { 00:32:34.452 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:34.452 "listen_address": { 00:32:34.452 "trtype": "TCP", 00:32:34.452 "adrfam": "IPv4", 00:32:34.452 "traddr": "10.0.0.2", 00:32:34.452 "trsvcid": "4420" 00:32:34.452 }, 00:32:34.452 "secure_channel": false, 00:32:34.452 "sock_impl": "ssl" 00:32:34.452 } 00:32:34.452 } 00:32:34.452 ] 00:32:34.452 } 00:32:34.452 ] 00:32:34.452 }' 00:32:34.452 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=237925 00:32:34.452 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 237925 00:32:34.452 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:32:34.452 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 237925 ']' 00:32:34.452 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:34.452 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:34.452 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:34.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:34.452 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:34.452 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:34.452 [2024-10-01 22:31:29.528426] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:32:34.452 [2024-10-01 22:31:29.528482] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:34.452 [2024-10-01 22:31:29.594170] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:34.452 [2024-10-01 22:31:29.658300] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:34.452 [2024-10-01 22:31:29.658339] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:34.452 [2024-10-01 22:31:29.658347] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:34.452 [2024-10-01 22:31:29.658354] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:34.452 [2024-10-01 22:31:29.658359] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:34.452 [2024-10-01 22:31:29.658406] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:34.711 [2024-10-01 22:31:29.910294] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:34.712 [2024-10-01 22:31:29.942302] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:34.712 [2024-10-01 22:31:29.942525] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:35.282 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:35.282 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:32:35.282 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:35.282 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:35.282 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:35.282 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:35.282 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=238057 00:32:35.282 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 238057 /var/tmp/bdevperf.sock 00:32:35.282 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 238057 ']' 00:32:35.282 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:35.282 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:35.282 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:35.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
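Note: this is the save_config round-trip at the center of these target/tls.sh steps: the JSON captured from the old target is echoed back into a freshly started nvmf_tgt through the /dev/fd/62 process substitution (@273), and the matching initiator-side JSON into bdevperf through /dev/fd/63 (@274), so neither process needs a config file on disk. The keyring entry (key0 -> /tmp/tmp.zLn9L4hQ3j) together with "psk": "key0" on nvmf_subsystem_add_host and "sock_impl": "ssl" on the listener is what makes the restarted target TLS-capable. A minimal sketch of the same capture-and-replay pattern, using the binaries and RPC sockets seen in this log (the JSON payload is elided; /tmp/tgt.json is a scratch path invented here for illustration):

    # Capture the live target's configuration as JSON over its RPC socket.
    scripts/rpc.py -s /var/tmp/spdk.sock save_config > /tmp/tgt.json
    # Replay it into a fresh target at startup; feeding it as -c /dev/fd/62
    # from a process substitution, as this test does, avoids the temp file.
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /tmp/tgt.json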
00:32:35.282 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:32:35.282 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:35.282 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:35.282 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:32:35.282 "subsystems": [ 00:32:35.282 { 00:32:35.282 "subsystem": "keyring", 00:32:35.282 "config": [ 00:32:35.282 { 00:32:35.282 "method": "keyring_file_add_key", 00:32:35.282 "params": { 00:32:35.282 "name": "key0", 00:32:35.282 "path": "/tmp/tmp.zLn9L4hQ3j" 00:32:35.282 } 00:32:35.282 } 00:32:35.282 ] 00:32:35.282 }, 00:32:35.282 { 00:32:35.282 "subsystem": "iobuf", 00:32:35.282 "config": [ 00:32:35.282 { 00:32:35.282 "method": "iobuf_set_options", 00:32:35.282 "params": { 00:32:35.282 "small_pool_count": 8192, 00:32:35.282 "large_pool_count": 1024, 00:32:35.282 "small_bufsize": 8192, 00:32:35.282 "large_bufsize": 135168 00:32:35.282 } 00:32:35.282 } 00:32:35.282 ] 00:32:35.282 }, 00:32:35.282 { 00:32:35.282 "subsystem": "sock", 00:32:35.282 "config": [ 00:32:35.282 { 00:32:35.282 "method": "sock_set_default_impl", 00:32:35.282 "params": { 00:32:35.282 "impl_name": "posix" 00:32:35.282 } 00:32:35.282 }, 00:32:35.282 { 00:32:35.282 "method": "sock_impl_set_options", 00:32:35.282 "params": { 00:32:35.282 "impl_name": "ssl", 00:32:35.282 "recv_buf_size": 4096, 00:32:35.282 "send_buf_size": 4096, 00:32:35.282 "enable_recv_pipe": true, 00:32:35.282 "enable_quickack": false, 00:32:35.282 "enable_placement_id": 0, 00:32:35.282 "enable_zerocopy_send_server": true, 00:32:35.282 "enable_zerocopy_send_client": false, 00:32:35.282 "zerocopy_threshold": 0, 00:32:35.282 "tls_version": 0, 00:32:35.282 "enable_ktls": false 00:32:35.282 } 00:32:35.282 }, 00:32:35.282 { 00:32:35.282 "method": "sock_impl_set_options", 00:32:35.282 "params": { 00:32:35.282 "impl_name": "posix", 00:32:35.282 "recv_buf_size": 2097152, 00:32:35.282 "send_buf_size": 2097152, 00:32:35.282 "enable_recv_pipe": true, 00:32:35.282 "enable_quickack": false, 00:32:35.282 "enable_placement_id": 0, 00:32:35.282 "enable_zerocopy_send_server": true, 00:32:35.282 "enable_zerocopy_send_client": false, 00:32:35.282 "zerocopy_threshold": 0, 00:32:35.282 "tls_version": 0, 00:32:35.282 "enable_ktls": false 00:32:35.282 } 00:32:35.282 } 00:32:35.282 ] 00:32:35.282 }, 00:32:35.282 { 00:32:35.282 "subsystem": "vmd", 00:32:35.282 "config": [] 00:32:35.282 }, 00:32:35.282 { 00:32:35.282 "subsystem": "accel", 00:32:35.282 "config": [ 00:32:35.282 { 00:32:35.282 "method": "accel_set_options", 00:32:35.282 "params": { 00:32:35.282 "small_cache_size": 128, 00:32:35.282 "large_cache_size": 16, 00:32:35.282 "task_count": 2048, 00:32:35.282 "sequence_count": 2048, 00:32:35.282 "buf_count": 2048 00:32:35.282 } 00:32:35.282 } 00:32:35.282 ] 00:32:35.282 }, 00:32:35.282 { 00:32:35.282 "subsystem": "bdev", 00:32:35.282 "config": [ 00:32:35.282 { 00:32:35.282 "method": "bdev_set_options", 00:32:35.282 "params": { 00:32:35.282 "bdev_io_pool_size": 65535, 00:32:35.282 "bdev_io_cache_size": 256, 00:32:35.282 "bdev_auto_examine": true, 00:32:35.282 "iobuf_small_cache_size": 128, 00:32:35.282 "iobuf_large_cache_size": 16, 00:32:35.282 "bdev_io_stack_size": 4096 00:32:35.282 } 00:32:35.282 }, 00:32:35.282 { 00:32:35.282 "method": 
"bdev_raid_set_options", 00:32:35.282 "params": { 00:32:35.282 "process_window_size_kb": 1024, 00:32:35.282 "process_max_bandwidth_mb_sec": 0 00:32:35.282 } 00:32:35.282 }, 00:32:35.282 { 00:32:35.282 "method": "bdev_iscsi_set_options", 00:32:35.282 "params": { 00:32:35.282 "timeout_sec": 30 00:32:35.282 } 00:32:35.282 }, 00:32:35.282 { 00:32:35.282 "method": "bdev_nvme_set_options", 00:32:35.282 "params": { 00:32:35.282 "action_on_timeout": "none", 00:32:35.282 "timeout_us": 0, 00:32:35.282 "timeout_admin_us": 0, 00:32:35.282 "keep_alive_timeout_ms": 10000, 00:32:35.283 "arbitration_burst": 0, 00:32:35.283 "low_priority_weight": 0, 00:32:35.283 "medium_priority_weight": 0, 00:32:35.283 "high_priority_weight": 0, 00:32:35.283 "nvme_adminq_poll_period_us": 10000, 00:32:35.283 "nvme_ioq_poll_period_us": 0, 00:32:35.283 "io_queue_requests": 512, 00:32:35.283 "delay_cmd_submit": true, 00:32:35.283 "transport_retry_count": 4, 00:32:35.283 "bdev_retry_count": 3, 00:32:35.283 "transport_ack_timeout": 0, 00:32:35.283 "ctrlr_loss_timeout_sec": 0, 00:32:35.283 "reconnect_delay_sec": 0, 00:32:35.283 "fast_io_fail_timeout_sec": 0, 00:32:35.283 "disable_auto_failback": false, 00:32:35.283 "generate_uuids": false, 00:32:35.283 "transport_tos": 0, 00:32:35.283 "nvme_error_stat": false, 00:32:35.283 "rdma_srq_size": 0, 00:32:35.283 "io_path_stat": false, 00:32:35.283 "allow_accel_sequence": false, 00:32:35.283 "rdma_max_cq_size": 0, 00:32:35.283 "rdma_cm_event_timeout_ms": 0, 00:32:35.283 "dhchap_digests": [ 00:32:35.283 "sha256", 00:32:35.283 "sha384", 00:32:35.283 "sha512" 00:32:35.283 ], 00:32:35.283 "dhchap_dhgroups": [ 00:32:35.283 "null", 00:32:35.283 "ffdhe2048", 00:32:35.283 "ffdhe3072", 00:32:35.283 "ffdhe4096", 00:32:35.283 "ffdhe6144", 00:32:35.283 "ffdhe8192" 00:32:35.283 ] 00:32:35.283 } 00:32:35.283 }, 00:32:35.283 { 00:32:35.283 "method": "bdev_nvme_attach_controller", 00:32:35.283 "params": { 00:32:35.283 "name": "nvme0", 00:32:35.283 "trtype": "TCP", 00:32:35.283 "adrfam": "IPv4", 00:32:35.283 "traddr": "10.0.0.2", 00:32:35.283 "trsvcid": "4420", 00:32:35.283 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:35.283 "prchk_reftag": false, 00:32:35.283 "prchk_guard": false, 00:32:35.283 "ctrlr_loss_timeout_sec": 0, 00:32:35.283 "reconnect_delay_sec": 0, 00:32:35.283 "fast_io_fail_timeout_sec": 0, 00:32:35.283 "psk": "key0", 00:32:35.283 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:35.283 "hdgst": false, 00:32:35.283 "ddgst": false 00:32:35.283 } 00:32:35.283 }, 00:32:35.283 { 00:32:35.283 "method": "bdev_nvme_set_hotplug", 00:32:35.283 "params": { 00:32:35.283 "period_us": 100000, 00:32:35.283 "enable": false 00:32:35.283 } 00:32:35.283 }, 00:32:35.283 { 00:32:35.283 "method": "bdev_enable_histogram", 00:32:35.283 "params": { 00:32:35.283 "name": "nvme0n1", 00:32:35.283 "enable": true 00:32:35.283 } 00:32:35.283 }, 00:32:35.283 { 00:32:35.283 "method": "bdev_wait_for_examine" 00:32:35.283 } 00:32:35.283 ] 00:32:35.283 }, 00:32:35.283 { 00:32:35.283 "subsystem": "nbd", 00:32:35.283 "config": [] 00:32:35.283 } 00:32:35.283 ] 00:32:35.283 }' 00:32:35.283 [2024-10-01 22:31:30.396834] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:32:35.283 [2024-10-01 22:31:30.396889] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid238057 ] 00:32:35.283 [2024-10-01 22:31:30.473617] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:35.283 [2024-10-01 22:31:30.527698] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:35.542 [2024-10-01 22:31:30.712254] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:36.118 22:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:36.118 22:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:32:36.118 22:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:36.118 22:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:32:36.118 22:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.118 22:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:36.378 Running I/O for 1 seconds... 00:32:37.320 2532.00 IOPS, 9.89 MiB/s 00:32:37.320 Latency(us) 00:32:37.320 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:37.320 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:37.320 Verification LBA range: start 0x0 length 0x2000 00:32:37.320 nvme0n1 : 1.02 2605.15 10.18 0.00 0.00 48776.99 5843.63 166024.53 00:32:37.320 =================================================================================================================== 00:32:37.320 Total : 2605.15 10.18 0.00 0.00 48776.99 5843.63 166024.53 00:32:37.320 { 00:32:37.320 "results": [ 00:32:37.320 { 00:32:37.320 "job": "nvme0n1", 00:32:37.320 "core_mask": "0x2", 00:32:37.320 "workload": "verify", 00:32:37.320 "status": "finished", 00:32:37.320 "verify_range": { 00:32:37.320 "start": 0, 00:32:37.320 "length": 8192 00:32:37.320 }, 00:32:37.320 "queue_depth": 128, 00:32:37.320 "io_size": 4096, 00:32:37.320 "runtime": 1.021053, 00:32:37.320 "iops": 2605.1536991713456, 00:32:37.320 "mibps": 10.176381637388069, 00:32:37.320 "io_failed": 0, 00:32:37.320 "io_timeout": 0, 00:32:37.320 "avg_latency_us": 48776.98502255639, 00:32:37.320 "min_latency_us": 5843.626666666667, 00:32:37.320 "max_latency_us": 166024.53333333333 00:32:37.320 } 00:32:37.320 ], 00:32:37.320 "core_count": 1 00:32:37.320 } 00:32:37.320 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:32:37.320 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:32:37.320 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:32:37.320 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:32:37.320 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:32:37.320 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:32:37.320 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:32:37.320 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:32:37.320 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:32:37.320 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:32:37.320 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:32:37.320 nvmf_trace.0 00:32:37.320 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:32:37.320 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 238057 00:32:37.320 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 238057 ']' 00:32:37.320 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 238057 00:32:37.320 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:32:37.581 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:37.581 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 238057 00:32:37.581 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:37.581 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:37.581 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 238057' 00:32:37.581 killing process with pid 238057 00:32:37.581 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 238057 00:32:37.581 Received shutdown signal, test time was about 1.000000 seconds 00:32:37.581 00:32:37.581 Latency(us) 00:32:37.581 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:37.581 =================================================================================================================== 00:32:37.581 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:37.581 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 238057 00:32:37.581 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:32:37.581 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:37.581 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:32:37.581 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:37.581 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:32:37.581 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:37.581 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:37.581 rmmod nvme_tcp 00:32:37.581 rmmod nvme_fabrics 00:32:37.842 rmmod nvme_keyring 00:32:37.842 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:37.842 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:32:37.842 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 
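Note: the throughput figures in the bdevperf results above cross-check against each other. With 4096-byte I/Os, MiB/s = IOPS * 4096 / 2^20, so the whole-run average of 2605.15 IOPS over the 1.021053 s runtime comes out to about 10.18 MiB/s, matching the reported "mibps" of 10.176...; the 2532.00 IOPS headline appears to be the one-second interval sample printed while the run was in flight rather than the final average. The arithmetic can be replayed directly from the results JSON:

    # Recompute "mibps" from "iops" and "io_size" (values copied from the JSON above).
    echo 'scale=6; 2605.1536991713456 * 4096 / 1048576' | bc
    # prints 10.176381..., in agreement with the reported "mibps"

The teardown that follows (nvmftestfini) then mirrors the setup: sync, unload the kernel initiator modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above), restore the pre-test iptables rules by filtering out the SPDK_NVMF entries, and flush the test interface addresses.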
00:32:37.842 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@515 -- # '[' -n 237925 ']' 00:32:37.842 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # killprocess 237925 00:32:37.842 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 237925 ']' 00:32:37.842 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 237925 00:32:37.842 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:32:37.842 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:37.842 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 237925 00:32:37.842 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:37.842 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:37.842 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 237925' 00:32:37.842 killing process with pid 237925 00:32:37.842 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 237925 00:32:37.842 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 237925 00:32:38.102 22:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:38.102 22:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:38.102 22:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:38.102 22:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:32:38.102 22:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-save 00:32:38.102 22:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:38.103 22:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-restore 00:32:38.103 22:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:38.103 22:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:38.103 22:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:38.103 22:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:38.103 22:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:40.015 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:40.015 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.0y3IhAMUvI /tmp/tmp.vUfcOTLrz6 /tmp/tmp.zLn9L4hQ3j 00:32:40.015 00:32:40.015 real 1m27.556s 00:32:40.015 user 2m18.336s 00:32:40.015 sys 0m25.712s 00:32:40.015 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:40.015 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:40.015 ************************************ 00:32:40.015 END TEST nvmf_tls 00:32:40.015 ************************************ 00:32:40.015 22:31:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:32:40.015 22:31:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:40.015 22:31:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:40.015 22:31:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:32:40.276 ************************************ 00:32:40.276 START TEST nvmf_fips 00:32:40.276 ************************************ 00:32:40.276 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:32:40.276 * Looking for test storage... 00:32:40.276 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:32:40.276 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:40.276 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lcov --version 00:32:40.276 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:40.276 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:40.276 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:40.276 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:40.276 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:40.276 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:32:40.276 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:32:40.276 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:32:40.276 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:32:40.276 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:32:40.276 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:32:40.276 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:32:40.276 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:40.276 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:32:40.276 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:32:40.276 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:40.276 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:40.276 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:32:40.276 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:32:40.276 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:40.276 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:32:40.276 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:32:40.276 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:32:40.276 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:32:40.276 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:40.276 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:32:40.276 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:32:40.276 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:40.276 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:40.276 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:32:40.276 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:40.276 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:40.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:40.276 --rc genhtml_branch_coverage=1 00:32:40.276 --rc genhtml_function_coverage=1 00:32:40.276 --rc genhtml_legend=1 00:32:40.276 --rc geninfo_all_blocks=1 00:32:40.276 --rc geninfo_unexecuted_blocks=1 00:32:40.276 00:32:40.276 ' 00:32:40.276 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:40.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:40.276 --rc genhtml_branch_coverage=1 00:32:40.276 --rc genhtml_function_coverage=1 00:32:40.276 --rc genhtml_legend=1 00:32:40.276 --rc geninfo_all_blocks=1 00:32:40.276 --rc geninfo_unexecuted_blocks=1 00:32:40.276 00:32:40.276 ' 00:32:40.276 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:40.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:40.276 --rc genhtml_branch_coverage=1 00:32:40.276 --rc genhtml_function_coverage=1 00:32:40.276 --rc genhtml_legend=1 00:32:40.276 --rc geninfo_all_blocks=1 00:32:40.276 --rc geninfo_unexecuted_blocks=1 00:32:40.276 00:32:40.277 ' 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:40.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:40.277 --rc genhtml_branch_coverage=1 00:32:40.277 --rc genhtml_function_coverage=1 00:32:40.277 --rc genhtml_legend=1 00:32:40.277 --rc geninfo_all_blocks=1 00:32:40.277 --rc geninfo_unexecuted_blocks=1 00:32:40.277 00:32:40.277 ' 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:40.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:32:40.277 22:31:35 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:32:40.277 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:32:40.539 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:32:40.539 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:40.539 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:40.539 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:32:40.539 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:40.539 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:32:40.539 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:32:40.539 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:40.539 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:32:40.539 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:32:40.539 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:32:40.539 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:32:40.539 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:32:40.539 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:32:40.539 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:32:40.539 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:40.539 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:32:40.539 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:32:40.539 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:32:40.539 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:32:40.539 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:32:40.539 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:32:40.539 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:32:40.539 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:32:40.540 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:32:40.540 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:32:40.540 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:32:40.540 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:32:40.540 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:32:40.540 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:32:40.540 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:32:40.540 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:32:40.540 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:32:40.540 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:32:40.540 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:32:40.540 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:32:40.540 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:32:40.540 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:32:40.540 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:32:40.540 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:32:40.540 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:32:40.540 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:40.540 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:32:40.540 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:40.540 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:32:40.540 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:40.540 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:32:40.540 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:32:40.540 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:32:40.540 Error setting digest 00:32:40.540 4012BF3D007F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:32:40.540 4012BF3D007F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:32:40.540 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:32:40.540 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:40.540 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:40.540 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:40.540 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:32:40.540 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:40.540 
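
Everything from openssl info -modulesdir down to the failed openssl md5 above is one self-check: confirm a FIPS provider module exists, point OPENSSL_CONF at the generated config, list the providers, then prove enforcement by watching a non-approved digest get rejected. The two evp_fetch errors are the expected outcome; the NOT wrapper inverts the exit status, so es=1 lets the test proceed. The same probe reduced to a standalone sketch: spdk_fips.conf is the file the test writes via build_openssl_config, the rest is stock OpenSSL 3.x:

    # Fail unless the OpenSSL FIPS provider is present and actually enforcing.
    modulesdir=$(openssl info -modulesdir)
    [[ -f "$modulesdir/fips.so" ]] || { echo "no fips.so in $modulesdir" >&2; exit 1; }

    export OPENSSL_CONF=spdk_fips.conf      # config that activates the fips provider
    openssl list -providers | grep name     # expect a base and a fips provider

    # MD5 is not FIPS-approved, so a working FIPS setup must reject it.
    if echo -n probe | openssl md5 >/dev/null 2>&1; then
        echo "MD5 still works: FIPS not enforced" >&2; exit 1
    fi
    echo "MD5 rejected: FIPS provider active"
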
22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:40.540 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:40.540 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:40.540 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:40.540 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:40.540 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:40.540 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:40.540 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:40.540 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:40.540 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:32:40.540 22:31:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:32:48.810 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:48.810 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:32:48.810 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:48.810 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:48.810 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:48.810 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:48.810 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:48.810 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:32:48.810 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:48.810 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:32:48.810 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:32:48.810 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:32:48.810 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:32:48.810 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:32:48.810 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:32:48.810 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:48.810 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:48.811 22:31:42 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:48.811 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:48.811 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:48.811 22:31:42 
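
The loop entered above resolves each matched PCI function to its kernel network interface purely through sysfs, which is what produces the "Found net devices under 0000:4b:00.0: cvl_0_0" lines that follow. A standalone sketch of that lookup; the two E810 addresses are the ones from this run:

    # Map PCI network functions to their kernel interface names via sysfs.
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e "$netdir" ]] || continue   # function without a netdev (driver unbound)
            dev=${netdir##*/}                # e.g. cvl_0_0
            echo "Found net device under $pci: $dev ($(cat "$netdir/operstate"))"
        done
    done
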
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:48.811 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:48.811 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # is_hw=yes 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:48.811 22:31:42 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:48.811 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:48.811 22:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:48.811 22:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:48.811 22:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:48.811 22:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:48.811 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:48.811 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.670 ms 00:32:48.811 00:32:48.811 --- 10.0.0.2 ping statistics --- 00:32:48.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:48.811 rtt min/avg/max/mdev = 0.670/0.670/0.670/0.000 ms 00:32:48.811 22:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:48.811 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:48.811 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:32:48.811 00:32:48.811 --- 10.0.0.1 ping statistics --- 00:32:48.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:48.811 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:32:48.811 22:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:48.811 22:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # return 0 00:32:48.811 22:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:48.811 22:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:48.811 22:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:48.811 22:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:48.811 22:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:48.811 22:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:48.811 22:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:48.811 22:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:32:48.811 22:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:48.811 22:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:48.811 22:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:32:48.811 22:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:48.811 22:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # nvmfpid=242792 00:32:48.811 22:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # waitforlisten 242792 00:32:48.811 22:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 242792 ']' 00:32:48.811 22:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:48.811 22:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:48.811 22:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:48.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:48.812 22:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:48.812 22:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:32:48.812 [2024-10-01 22:31:43.238244] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:32:48.812 [2024-10-01 22:31:43.238296] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:48.812 [2024-10-01 22:31:43.315946] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:48.812 [2024-10-01 22:31:43.406313] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:48.812 [2024-10-01 22:31:43.406370] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:48.812 [2024-10-01 22:31:43.406379] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:48.812 [2024-10-01 22:31:43.406386] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:48.812 [2024-10-01 22:31:43.406392] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:48.812 [2024-10-01 22:31:43.406416] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:48.812 22:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:48.812 22:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:32:48.812 22:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:48.812 22:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:48.812 22:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:32:49.073 22:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:49.073 22:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:32:49.073 22:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:32:49.073 22:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:32:49.073 22:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.x2A 00:32:49.073 22:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:32:49.073 22:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.x2A 00:32:49.073 22:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.x2A 00:32:49.073 22:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.x2A 00:32:49.073 22:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:49.073 [2024-10-01 22:31:44.276741] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:49.073 [2024-10-01 22:31:44.292735] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:49.073 [2024-10-01 22:31:44.293030] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:49.335 malloc0 00:32:49.335 22:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:49.335 22:31:44 
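
The fips.sh@137-140 sequence traced above stages the TLS pre-shared key: a fixed interchange-format key is written, without a trailing newline, to a mktemp file that is locked down to 0600 before anything references it by path. Reproduced as a sketch, with the key value and temp-file template copied from the trace:

    # Stage the NVMe/TCP TLS pre-shared key in a private file.
    key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    key_path=$(mktemp -t spdk-psk.XXX)
    echo -n "$key" > "$key_path"    # no newline: the file contents are the key
    chmod 0600 "$key_path"          # owner-only, as the test enforces
    echo "PSK staged at $key_path"
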
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=243120 00:32:49.335 22:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 243120 /var/tmp/bdevperf.sock 00:32:49.335 22:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:32:49.335 22:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 243120 ']' 00:32:49.335 22:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:49.335 22:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:49.335 22:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:49.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:49.335 22:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:49.335 22:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:32:49.335 [2024-10-01 22:31:44.436656] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:32:49.335 [2024-10-01 22:31:44.436728] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid243120 ] 00:32:49.335 [2024-10-01 22:31:44.494035] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:49.335 [2024-10-01 22:31:44.558035] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:32:50.280 22:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:50.280 22:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:32:50.280 22:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.x2A 00:32:50.280 22:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:32:50.280 [2024-10-01 22:31:45.503690] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:50.541 TLSTESTn1 00:32:50.541 22:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:50.541 Running I/O for 10 seconds... 
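
Before the ten-second run above starts, the whole TLS client side is the two rpc.py calls traced at fips.sh@151 and @152: register the key file under a keyring name, then attach with --psk pointing at that name. Shown as a standalone pair; socket, paths, NQNs and the 10.0.0.2:4420 listener are the values from this run, and key0 is just the keyring name the test chose:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # 1) make the PSK file available to bdevperf under the keyring name key0
    $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.x2A

    # 2) attach to the TLS listener, authenticating with that PSK
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
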
00:33:00.831 2043.00 IOPS, 7.98 MiB/s 2507.50 IOPS, 9.79 MiB/s 3153.00 IOPS, 12.32 MiB/s 3715.50 IOPS, 14.51 MiB/s 3489.00 IOPS, 13.63 MiB/s 3213.83 IOPS, 12.55 MiB/s 3339.71 IOPS, 13.05 MiB/s 3522.12 IOPS, 13.76 MiB/s 3314.67 IOPS, 12.95 MiB/s 3289.10 IOPS, 12.85 MiB/s
00:33:00.831 Latency(us)
00:33:00.831 Device Information : runtime(s)    IOPS   MiB/s  Fail/s  TO/s   Average      min       max
00:33:00.831 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:33:00.831 Verification LBA range: start 0x0 length 0x2000
00:33:00.831 TLSTESTn1          :      10.03 3293.10   12.86    0.00  0.00  38818.78  4751.36 161655.47
00:33:00.831 ===================================================================================================================
00:33:00.831 Total              :            3293.10   12.86    0.00  0.00  38818.78  4751.36 161655.47
00:33:00.831 {
00:33:00.831   "results": [
00:33:00.831     {
00:33:00.831       "job": "TLSTESTn1",
00:33:00.831       "core_mask": "0x4",
00:33:00.831       "workload": "verify",
00:33:00.831       "status": "finished",
00:33:00.831       "verify_range": {
00:33:00.831         "start": 0,
00:33:00.831         "length": 8192
00:33:00.831       },
00:33:00.831       "queue_depth": 128,
00:33:00.831       "io_size": 4096,
00:33:00.831       "runtime": 10.026415,
00:33:00.831       "iops": 3293.1012729874037,
00:33:00.831       "mibps": 12.863676847607046,
00:33:00.831       "io_failed": 0,
00:33:00.831       "io_timeout": 0,
00:33:00.831       "avg_latency_us": 38818.78492458659,
00:33:00.831       "min_latency_us": 4751.36,
00:33:00.831       "max_latency_us": 161655.46666666667
00:33:00.831     }
00:33:00.831   ],
00:33:00.831   "core_count": 1
00:33:00.831 }
00:33:00.831 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:33:00.831 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
00:33:00.831 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id
00:33:00.831 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0
00:33:00.831 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']'
00:33:00.831 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:33:00.831 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0
00:33:00.831 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]]
00:33:00.831 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files
00:33:00.831 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:33:00.831 nvmf_trace.0
00:33:00.831 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0
00:33:00.831 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 243120
00:33:00.831 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 243120 ']'
00:33:00.831 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 243120
00:33:00.831 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname
00:33:00.831 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:33:00.831 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 243120
00:33:00.831 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:33:00.831 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:33:00.831 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 243120'
00:33:00.831 killing process with pid 243120
00:33:00.831 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 243120
00:33:00.831 Received shutdown signal, test time was about 10.000000 seconds
00:33:00.831
00:33:00.831 Latency(us)
00:33:00.831 Device Information : runtime(s)    IOPS   MiB/s  Fail/s  TO/s   Average      min       max
00:33:00.831 ===================================================================================================================
00:33:00.831 Total              :               0.00    0.00    0.00  0.00      0.00     0.00      0.00
00:33:00.831 22:31:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 243120
00:33:01.092 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini
00:33:01.092 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # nvmfcleanup
00:33:01.092 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync
00:33:01.092 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:33:01.092 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e
00:33:01.092 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20}
00:33:01.092 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:33:01.092 rmmod nvme_tcp
00:33:01.092 rmmod nvme_fabrics
00:33:01.092 rmmod nvme_keyring
00:33:01.092 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:33:01.092 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e
00:33:01.092 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0
00:33:01.092 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@515 -- # '[' -n 242792 ']'
00:33:01.092 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # killprocess 242792
00:33:01.092 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 242792 ']'
00:33:01.092 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 242792
00:33:01.092 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname
00:33:01.092 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:33:01.092 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 242792
00:33:01.092 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:33:01.092 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:33:01.092 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 242792'
00:33:01.092 killing process with pid 242792
00:33:01.092 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 242792
00:33:01.092 22:31:56
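
The summary JSON above is internally consistent, which is worth a quick check when reading bdevperf output: MiB/s is just IOPS scaled by the 4096-byte I/O size, and with 128 I/Os outstanding, Little's law puts the achievable rate near depth divided by average latency. The second figure is only approximate, since ramp-up and an imperfectly full pipeline shift it a few percent:

    awk 'BEGIN {
        iops = 3293.1012729874037; io_size = 4096           # from the JSON above
        depth = 128; avg_lat_s = 38818.78492458659 * 1e-6
        printf "MiB/s  = %.6f\n", iops * io_size / 1048576  # matches "mibps": 12.8637
        printf "IOPS ~= %.0f\n", depth / avg_lat_s          # ~3297, close to the measured 3293
    }'
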
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 242792 00:33:01.353 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:01.353 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:01.353 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:01.353 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:33:01.353 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-save 00:33:01.353 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:01.353 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-restore 00:33:01.353 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:01.353 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:01.353 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:01.353 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:01.353 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:03.264 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:03.264 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.x2A 00:33:03.264 00:33:03.264 real 0m23.203s 00:33:03.264 user 0m25.232s 00:33:03.264 sys 0m9.080s 00:33:03.264 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:03.264 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:33:03.264 ************************************ 00:33:03.264 END TEST nvmf_fips 00:33:03.264 ************************************ 00:33:03.264 22:31:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:33:03.264 22:31:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:03.264 22:31:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:03.264 22:31:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:33:03.525 ************************************ 00:33:03.525 START TEST nvmf_control_msg_list 00:33:03.525 ************************************ 00:33:03.525 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:33:03.525 * Looking for test storage... 
00:33:03.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:03.525 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:03.525 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lcov --version 00:33:03.525 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:03.525 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:03.525 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:03.525 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:03.525 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:03.525 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:33:03.525 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:33:03.525 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:33:03.525 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:33:03.525 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:33:03.525 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:33:03.525 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:33:03.525 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:03.525 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:33:03.525 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:33:03.525 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:03.525 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:03.525 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:33:03.525 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:33:03.525 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:03.525 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:33:03.525 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:33:03.525 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:33:03.525 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:33:03.526 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:03.526 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:33:03.526 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:33:03.526 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:03.526 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:03.526 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:33:03.526 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:03.526 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:03.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:03.526 --rc genhtml_branch_coverage=1 00:33:03.526 --rc genhtml_function_coverage=1 00:33:03.526 --rc genhtml_legend=1 00:33:03.526 --rc geninfo_all_blocks=1 00:33:03.526 --rc geninfo_unexecuted_blocks=1 00:33:03.526 00:33:03.526 ' 00:33:03.526 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:03.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:03.526 --rc genhtml_branch_coverage=1 00:33:03.526 --rc genhtml_function_coverage=1 00:33:03.526 --rc genhtml_legend=1 00:33:03.526 --rc geninfo_all_blocks=1 00:33:03.526 --rc geninfo_unexecuted_blocks=1 00:33:03.526 00:33:03.526 ' 00:33:03.526 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:03.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:03.526 --rc genhtml_branch_coverage=1 00:33:03.526 --rc genhtml_function_coverage=1 00:33:03.526 --rc genhtml_legend=1 00:33:03.526 --rc geninfo_all_blocks=1 00:33:03.526 --rc geninfo_unexecuted_blocks=1 00:33:03.526 00:33:03.526 ' 00:33:03.526 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:03.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:03.526 --rc genhtml_branch_coverage=1 00:33:03.526 --rc genhtml_function_coverage=1 00:33:03.526 --rc genhtml_legend=1 00:33:03.526 --rc geninfo_all_blocks=1 00:33:03.526 --rc geninfo_unexecuted_blocks=1 00:33:03.526 00:33:03.526 ' 00:33:03.526 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:03.526 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:33:03.526 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:03.526 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:03.526 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:03.526 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:03.526 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:03.526 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:03.526 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:03.526 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:03.526 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:03.526 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:03.526 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:33:03.526 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:33:03.526 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:03.526 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:03.526 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:03.526 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:03.526 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:03.526 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:33:03.526 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:03.526 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:03.526 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:03.526 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.526 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.526 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.526 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:33:03.526 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.526 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:33:03.526 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:03.526 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:03.787 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:03.787 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:03.787 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:03.787 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:03.787 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:03.787 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:03.787 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:03.787 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:03.787 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:33:03.787 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:03.787 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:03.787 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:03.787 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:03.787 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:03.787 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:03.787 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:03.787 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:03.787 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:03.787 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:03.787 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:33:03.787 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:33:11.925 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:11.925 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:33:11.925 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:11.925 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:11.925 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:11.925 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:11.925 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:11.925 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:33:11.925 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:11.925 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:33:11.925 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:33:11.925 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:33:11.925 22:32:05 
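
The "line 33: [: : integer expression expected" complaint above (also printed during the earlier fips run) is a recorded script wart rather than a test failure: an unset variable reaches a numeric test as an empty string, exactly the '[' '' -eq 1 ']' shape the xtrace shows, and bash moves on because the test simply returns nonzero. The guarded form below is the usual fix; the variable name is illustrative, not the one actually used at common.sh line 33:

    # '[' "$flag" -eq 1 ']' errors out when flag is empty or unset;
    # defaulting it first keeps the same logic without the noise.
    if [ "${flag:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi
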
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:33:11.925 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:33:11.925 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:33:11.925 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:11.925 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:11.925 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:11.925 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:11.925 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:11.925 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:11.925 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:11.925 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:11.925 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:11.925 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:11.925 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:11.925 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:11.925 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:11.925 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:11.925 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:11.925 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:11.925 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:11.925 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:11.925 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:11.925 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:11.925 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:11.925 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:11.925 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:11.925 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:11.925 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:11.925 22:32:05 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:11.925 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:11.925 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:11.925 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:11.925 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:11.925 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:11.925 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:11.925 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:11.925 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:11.925 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:11.925 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:11.925 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:11.925 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:11.926 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:11.926 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:11.926 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:11.926 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:11.926 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:11.926 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:11.926 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:11.926 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:11.926 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:11.926 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:11.926 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:11.926 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:11.926 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:11.926 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:11.926 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:11.926 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:11.926 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:11.926 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:11.926 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:11.926 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:11.926 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # is_hw=yes 00:33:11.926 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:11.926 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:11.926 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:11.926 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:11.926 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:11.926 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:11.926 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:11.926 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:11.926 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:11.926 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:11.926 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:11.926 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:11.926 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:11.926 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:11.926 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:11.926 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:11.926 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:11.926 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:11.926 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:11.926 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:11.926 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:11.926 22:32:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:11.926 22:32:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:11.926 22:32:06 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:11.926 22:32:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:11.926 22:32:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:11.926 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:11.926 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.589 ms 00:33:11.926 00:33:11.926 --- 10.0.0.2 ping statistics --- 00:33:11.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:11.926 rtt min/avg/max/mdev = 0.589/0.589/0.589/0.000 ms 00:33:11.926 22:32:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:11.926 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:11.926 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:33:11.926 00:33:11.926 --- 10.0.0.1 ping statistics --- 00:33:11.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:11.926 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:33:11.926 22:32:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:11.926 22:32:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@448 -- # return 0 00:33:11.926 22:32:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:11.926 22:32:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:11.926 22:32:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:11.926 22:32:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:11.926 22:32:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:11.926 22:32:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:11.926 22:32:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:11.926 22:32:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:33:11.926 22:32:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:11.926 22:32:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:11.926 22:32:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:33:11.926 22:32:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # nvmfpid=249585 00:33:11.926 22:32:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # waitforlisten 249585 00:33:11.926 22:32:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:11.926 22:32:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 249585 ']' 00:33:11.926 22:32:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 
-- # local rpc_addr=/var/tmp/spdk.sock 00:33:11.926 22:32:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:11.926 22:32:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:11.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:11.926 22:32:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:11.926 22:32:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:33:11.926 [2024-10-01 22:32:06.158006] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:33:11.926 [2024-10-01 22:32:06.158057] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:11.926 [2024-10-01 22:32:06.225563] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:11.926 [2024-10-01 22:32:06.289023] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:11.926 [2024-10-01 22:32:06.289061] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:11.926 [2024-10-01 22:32:06.289069] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:11.926 [2024-10-01 22:32:06.289076] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:11.926 [2024-10-01 22:32:06.289082] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
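(Note on the setup traced above: nvmf_tcp_init splits the two Intel E810 ports of one machine so it can act as both NVMe/TCP target and initiator. The first port is moved into a private network namespace for the target; the second stays in the host namespace for the initiator. A condensed sketch of the commands the trace shows, using the interface names and addresses from this particular run, which vary per node:

    # target port gets its own netns; initiator port stays in the host netns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # the ACCEPT rule is tagged with an SPDK_NVMF comment so teardown can strip
    # it later with: iptables-save | grep -v SPDK_NVMF | iptables-restore
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

The nvmf_tgt process is then started inside that namespace, which is why the target launch above is prefixed with "ip netns exec cvl_0_0_ns_spdk".)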
00:33:11.927 [2024-10-01 22:32:06.289103] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:11.927 22:32:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:11.927 22:32:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:33:11.927 22:32:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:11.927 22:32:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:11.927 22:32:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:33:11.927 22:32:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:11.927 22:32:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:33:11.927 22:32:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:33:11.927 22:32:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:33:11.927 22:32:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.927 22:32:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:33:11.927 [2024-10-01 22:32:07.002605] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:11.927 22:32:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.927 22:32:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:33:11.927 22:32:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.927 22:32:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:33:11.927 22:32:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.927 22:32:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:33:11.927 22:32:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.927 22:32:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:33:11.927 Malloc0 00:33:11.927 22:32:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.927 22:32:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:33:11.927 22:32:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.927 22:32:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:33:11.927 22:32:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.927 22:32:07 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:11.927 22:32:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.927 22:32:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:33:11.927 [2024-10-01 22:32:07.053588] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:11.927 22:32:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.927 22:32:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=249838 00:33:11.927 22:32:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:11.927 22:32:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=249839 00:33:11.927 22:32:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:11.927 22:32:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=249840 00:33:11.927 22:32:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 249838 00:33:11.927 22:32:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:11.927 [2024-10-01 22:32:07.124168] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:33:11.927 [2024-10-01 22:32:07.144260] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:33:11.927 [2024-10-01 22:32:07.144698] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:33:13.313 Initializing NVMe Controllers 00:33:13.313 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:33:13.313 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:33:13.313 Initialization complete. Launching workers. 
00:33:13.313 ========================================================
00:33:13.313 Latency(us)
00:33:13.313 Device Information : IOPS MiB/s Average min max
00:33:13.313 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 2134.00 8.34 468.42 157.43 749.87
00:33:13.313 ========================================================
00:33:13.313 Total : 2134.00 8.34 468.42 157.43 749.87
00:33:13.313
00:33:13.313 22:32:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 249839
00:33:13.313 Initializing NVMe Controllers
00:33:13.313 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:33:13.313 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3
00:33:13.313 Initialization complete. Launching workers.
00:33:13.313 ========================================================
00:33:13.313 Latency(us)
00:33:13.313 Device Information : IOPS MiB/s Average min max
00:33:13.313 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 2147.00 8.39 465.58 144.57 40444.21
00:33:13.313 ========================================================
00:33:13.313 Total : 2147.00 8.39 465.58 144.57 40444.21
00:33:13.313
00:33:13.313 Initializing NVMe Controllers
00:33:13.313 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:33:13.313 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2
00:33:13.313 Initialization complete. Launching workers.
00:33:13.313 ========================================================
00:33:13.313 Latency(us)
00:33:13.313 Device Information : IOPS MiB/s Average min max
00:33:13.313 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40896.97 40753.68 40992.92
00:33:13.313 ========================================================
00:33:13.313 Total : 25.00 0.10 40896.97 40753.68 40992.92
00:33:13.313
00:33:13.313 22:32:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 249840
00:33:13.313 22:32:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:33:13.313 22:32:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini
00:33:13.313 22:32:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # nvmfcleanup
00:33:13.313 22:32:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync
00:33:13.313 22:32:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:33:13.313 22:32:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e
00:33:13.313 22:32:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20}
00:33:13.313 22:32:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:33:13.313 rmmod nvme_tcp
00:33:13.313 rmmod nvme_fabrics
00:33:13.313 rmmod nvme_keyring
00:33:13.313 22:32:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:33:13.313 22:32:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e
00:33:13.313 22:32:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0
00:33:13.313 22:32:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@515 -- 
'[' -n 249585 ']' 00:33:13.313 22:32:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # killprocess 249585 00:33:13.313 22:32:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 249585 ']' 00:33:13.313 22:32:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 249585 00:33:13.313 22:32:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:33:13.313 22:32:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:13.313 22:32:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 249585 00:33:13.573 22:32:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:13.573 22:32:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:13.573 22:32:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 249585' 00:33:13.573 killing process with pid 249585 00:33:13.573 22:32:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 249585 00:33:13.573 22:32:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 249585 00:33:13.573 22:32:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:13.573 22:32:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:13.573 22:32:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:13.573 22:32:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:33:13.573 22:32:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-save 00:33:13.573 22:32:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:13.574 22:32:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-restore 00:33:13.574 22:32:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:13.574 22:32:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:13.574 22:32:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:13.574 22:32:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:13.574 22:32:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:16.116 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:16.116 00:33:16.116 real 0m12.291s 00:33:16.116 user 0m8.145s 00:33:16.116 sys 0m6.324s 00:33:16.116 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:16.116 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:33:16.116 ************************************ 00:33:16.116 END TEST nvmf_control_msg_list 00:33:16.116 ************************************ 00:33:16.116 
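(Note on the "[: : integer expression expected" error that test/nvmf/common.sh line 33 printed at the start of this test; the same message recurs in the nvmf_wait_for_buf trace below. The xtrace shows the failing command as '[' '' -eq 1 ']': a numeric -eq test applied to a variable that expanded to the empty string, which [ cannot parse as an integer. The run is unaffected because the test simply evaluates false, but the conventional fix is to give the operand an integer default before comparing. A minimal sketch; the variable name here is illustrative, since xtrace shows only its expanded, empty value:

    # before: [ "$SOME_FLAG" -eq 1 ]   -> "[: : integer expression expected" when unset/empty
    # after:  ${...:-0} guarantees the numeric comparison always sees an integer
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi
)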
22:32:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:33:16.116 22:32:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:16.116 22:32:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:16.116 22:32:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:33:16.116 ************************************ 00:33:16.116 START TEST nvmf_wait_for_buf 00:33:16.116 ************************************ 00:33:16.116 22:32:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:33:16.116 * Looking for test storage... 00:33:16.116 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:16.116 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:16.116 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lcov --version 00:33:16.116 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:16.116 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:16.116 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:16.116 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:16.116 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:16.116 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:33:16.116 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:33:16.116 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:33:16.116 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:33:16.116 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:16.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:16.117 --rc genhtml_branch_coverage=1 00:33:16.117 --rc genhtml_function_coverage=1 00:33:16.117 --rc genhtml_legend=1 00:33:16.117 --rc geninfo_all_blocks=1 00:33:16.117 --rc geninfo_unexecuted_blocks=1 00:33:16.117 00:33:16.117 ' 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:16.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:16.117 --rc genhtml_branch_coverage=1 00:33:16.117 --rc genhtml_function_coverage=1 00:33:16.117 --rc genhtml_legend=1 00:33:16.117 --rc geninfo_all_blocks=1 00:33:16.117 --rc geninfo_unexecuted_blocks=1 00:33:16.117 00:33:16.117 ' 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:16.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:16.117 --rc genhtml_branch_coverage=1 00:33:16.117 --rc genhtml_function_coverage=1 00:33:16.117 --rc genhtml_legend=1 00:33:16.117 --rc geninfo_all_blocks=1 00:33:16.117 --rc geninfo_unexecuted_blocks=1 00:33:16.117 00:33:16.117 ' 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:16.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:16.117 --rc genhtml_branch_coverage=1 00:33:16.117 --rc genhtml_function_coverage=1 00:33:16.117 --rc genhtml_legend=1 00:33:16.117 --rc geninfo_all_blocks=1 00:33:16.117 --rc geninfo_unexecuted_blocks=1 00:33:16.117 00:33:16.117 ' 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:16.117 22:32:11 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:16.117 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@467 -- # 
'[' -z tcp ']' 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:16.117 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:16.118 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:33:16.118 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:24.258 
22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:24.258 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:24.258 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:24.258 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:24.258 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:24.259 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # is_hw=yes 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:24.259 22:32:18 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:24.259 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:24.259 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:33:24.259 00:33:24.259 --- 10.0.0.2 ping statistics --- 00:33:24.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:24.259 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:24.259 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:24.259 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.258 ms 00:33:24.259 00:33:24.259 --- 10.0.0.1 ping statistics --- 00:33:24.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:24.259 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@448 -- # return 0 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # nvmfpid=254308 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # waitforlisten 254308 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 254308 ']' 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:24.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:33:24.259 22:32:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:24.259 [2024-10-01 22:32:18.424020] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:33:24.259 [2024-10-01 22:32:18.424088] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:24.259 [2024-10-01 22:32:18.496258] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:24.259 [2024-10-01 22:32:18.569322] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:24.259 [2024-10-01 22:32:18.569364] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:24.259 [2024-10-01 22:32:18.569372] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:24.259 [2024-10-01 22:32:18.569379] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:24.259 [2024-10-01 22:32:18.569385] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:24.259 [2024-10-01 22:32:18.569403] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:24.259 22:32:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:24.259 22:32:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:33:24.259 22:32:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:24.259 22:32:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:24.259 22:32:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:33:24.259 22:32:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:24.259 22:32:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:33:24.259 22:32:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:33:24.259 22:32:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:33:24.259 22:32:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.259 22:32:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:33:24.259 22:32:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.259 22:32:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:33:24.259 22:32:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.259 22:32:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:33:24.259 22:32:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.259 22:32:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:33:24.259 22:32:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.259 22:32:19 
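Because nvmf_tgt was started with --wait-for-rpc, nothing past the app framework comes up until JSON-RPC calls arrive; wait_for_buf uses that window to zero the accel caches and shrink the small iobuf pool to 154 buffers before framework_start_init, so the TCP transport will be forced to wait for buffers under load. A sketch of the same sequence using scripts/rpc.py, which the rpc_cmd helper in the trace is assumed to wrap (flag spellings follow the trace):

    # start the target paused, then configure and release it over RPC
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    ./scripts/rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0
    ./scripts/rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192
    ./scripts/rpc.py framework_start_init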
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:33:24.259 22:32:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.259 22:32:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:33:24.259 22:32:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.259 22:32:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:33:24.259 Malloc0 00:33:24.259 22:32:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.259 22:32:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:33:24.259 22:32:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.259 22:32:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:33:24.259 [2024-10-01 22:32:19.380784] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:24.259 22:32:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.259 22:32:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:33:24.259 22:32:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.259 22:32:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:33:24.260 22:32:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.260 22:32:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:33:24.260 22:32:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.260 22:32:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:33:24.260 22:32:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.260 22:32:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:24.260 22:32:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.260 22:32:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:33:24.260 [2024-10-01 22:32:19.404966] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:24.260 22:32:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.260 22:32:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:24.260 [2024-10-01 22:32:19.484710] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:33:26.184 Initializing NVMe Controllers 00:33:26.184 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:33:26.184 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:33:26.184 Initialization complete. Launching workers. 00:33:26.184 ======================================================== 00:33:26.184 Latency(us) 00:33:26.184 Device Information : IOPS MiB/s Average min max 00:33:26.184 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 25.00 3.12 165842.25 47870.78 191553.11 00:33:26.184 ======================================================== 00:33:26.184 Total : 25.00 3.12 165842.25 47870.78 191553.11 00:33:26.184 00:33:26.184 22:32:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:33:26.184 22:32:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:33:26.184 22:32:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.184 22:32:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:33:26.184 22:32:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.184 22:32:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=374 00:33:26.184 22:32:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 374 -eq 0 ]] 00:33:26.184 22:32:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:33:26.184 22:32:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:33:26.184 22:32:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:26.184 22:32:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:33:26.184 22:32:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:26.184 22:32:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:33:26.184 22:32:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:26.184 22:32:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:26.184 rmmod nvme_tcp 00:33:26.184 rmmod nvme_fabrics 00:33:26.184 rmmod nvme_keyring 00:33:26.184 22:32:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:26.184 22:32:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:33:26.184 22:32:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:33:26.184 22:32:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@515 -- # '[' -n 254308 ']' 00:33:26.184 22:32:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # killprocess 254308 00:33:26.184 22:32:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 254308 ']' 00:33:26.184 22:32:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 254308 00:33:26.184 22:32:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
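The pass criterion of wait_for_buf is starvation, not throughput: the transport was created with only 24 buffers (-n 24 -b 24) against a 4-deep 128 KiB randread workload, so the iobuf small pool must have had to retry allocations. The test reads the counter back over RPC and fails if it is zero (here it was 374). A sketch of that assertion, with the jq filter taken verbatim from the trace:

    # read the nvmf_TCP small-pool retry counter back over RPC
    retry_count=$(./scripts/rpc.py iobuf_get_stats \
        | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
    [[ $retry_count -eq 0 ]] && { echo "no buffer waits observed"; exit 1; }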
common/autotest_common.sh@955 -- # uname 00:33:26.184 22:32:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:26.184 22:32:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 254308 00:33:26.184 22:32:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:26.184 22:32:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:26.184 22:32:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 254308' 00:33:26.184 killing process with pid 254308 00:33:26.184 22:32:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 254308 00:33:26.184 22:32:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 254308 00:33:26.184 22:32:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:26.184 22:32:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:26.184 22:32:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:26.184 22:32:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:33:26.184 22:32:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-save 00:33:26.184 22:32:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:26.184 22:32:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-restore 00:33:26.184 22:32:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:26.184 22:32:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:26.184 22:32:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:26.184 22:32:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:26.184 22:32:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:28.736 22:32:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:28.736 00:33:28.736 real 0m12.542s 00:33:28.736 user 0m5.165s 00:33:28.736 sys 0m5.901s 00:33:28.736 22:32:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:28.736 22:32:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:33:28.736 ************************************ 00:33:28.736 END TEST nvmf_wait_for_buf 00:33:28.736 ************************************ 00:33:28.736 22:32:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:33:28.736 22:32:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:33:28.736 22:32:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:33:28.736 22:32:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:33:28.736 22:32:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:33:28.736 22:32:23 
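Teardown is the mirror image of setup: kill the target, strip only the SPDK-tagged firewall rules by filtering the comment marker out of an iptables-save/restore round trip, then drop the namespace and flush the leftover address. Condensed, assuming _remove_spdk_ns boils down to deleting the namespace it created:

    kill "$nvmfpid" && wait "$nvmfpid"                     # killprocess in the trace
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # removes only SPDK's rules
    ip netns delete cvl_0_0_ns_spdk                        # assumed effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1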
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:33:35.320 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:35.320 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:33:35.320 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:35.320 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:35.320 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:35.320 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:35.320 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:35.320 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:33:35.320 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:35.320 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:33:35.320 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:33:35.320 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:33:35.320 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:33:35.320 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:33:35.320 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:33:35.320 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:35.320 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:35.320 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:35.320 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:35.320 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:35.320 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:35.320 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:35.320 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:35.320 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:35.320 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:35.320 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:35.320 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:35.320 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:35.320 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:35.320 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:35.320 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:35.320 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:35.321 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:35.321 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:33:35.321 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:35.321 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:35.321 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:35.321 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:35.321 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:35.321 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:35.321 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:35.321 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:35.321 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:35.321 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:35.321 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:35.321 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:35.321 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:35.321 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:35.321 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:35.321 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:35.321 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:35.321 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:35.321 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:35.321 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:35.321 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:35.321 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:35.321 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:35.321 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:35.321 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:35.321 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:35.321 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:35.321 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:35.321 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:35.321 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:35.321 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:35.321 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:35.321 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:35.321 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:35.321 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:35.321 22:32:30 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:35.321 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:35.321 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:35.321 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:35.321 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:35.321 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:33:35.321 22:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:33:35.321 22:32:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:35.321 22:32:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:35.321 22:32:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:33:35.321 ************************************ 00:33:35.321 START TEST nvmf_perf_adq 00:33:35.321 ************************************ 00:33:35.321 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:33:35.583 * Looking for test storage... 00:33:35.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lcov --version 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:35.583 22:32:30 
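The Found ... lines above come from gather_supported_nvmf_pci_devs, which keys supported NICs by vendor:device ID (0x8086:0x159b is the E810 match here) and then resolves each PCI function to its kernel interface by globbing sysfs. The core of the lookup, condensed from the trace:

    # resolve a PCI function to its net devices (as nvmf/common.sh does per device)
    pci=0000:4b:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # strip paths, keep interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"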
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:35.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:35.583 --rc genhtml_branch_coverage=1 00:33:35.583 --rc genhtml_function_coverage=1 00:33:35.583 --rc genhtml_legend=1 00:33:35.583 --rc geninfo_all_blocks=1 00:33:35.583 --rc geninfo_unexecuted_blocks=1 00:33:35.583 00:33:35.583 ' 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:35.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:35.583 --rc genhtml_branch_coverage=1 00:33:35.583 --rc genhtml_function_coverage=1 00:33:35.583 --rc genhtml_legend=1 00:33:35.583 --rc geninfo_all_blocks=1 00:33:35.583 --rc geninfo_unexecuted_blocks=1 00:33:35.583 00:33:35.583 ' 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:35.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:35.583 --rc genhtml_branch_coverage=1 00:33:35.583 --rc genhtml_function_coverage=1 00:33:35.583 --rc genhtml_legend=1 00:33:35.583 --rc geninfo_all_blocks=1 00:33:35.583 --rc geninfo_unexecuted_blocks=1 00:33:35.583 00:33:35.583 ' 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:35.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:35.583 --rc genhtml_branch_coverage=1 00:33:35.583 --rc genhtml_function_coverage=1 00:33:35.583 --rc genhtml_legend=1 00:33:35.583 --rc geninfo_all_blocks=1 00:33:35.583 --rc geninfo_unexecuted_blocks=1 00:33:35.583 00:33:35.583 ' 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
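The scripts/common.sh lines above are a field-wise version compare: the installed lcov version (extracted with awk '{print $NF}') is split on IFS=.-: and compared group by group against 2; since 1 < 2, the older LCOV_OPTS set gets exported. A minimal sketch of the comparator, assuming it mirrors cmp_versions' semantics:

    # field-wise "less than" in the spirit of cmp_versions (assumed equivalent)
    lt() {
        local IFS='.-:'
        local -a v1 v2
        read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
        local i
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov older than 2"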
00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:33:35.583 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:35.584 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:35.584 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:35.584 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:35.584 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:35.584 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:35.584 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:33:35.584 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:35.584 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:33:35.584 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:35.584 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:35.584 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:35.584 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:35.584 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:35.584 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:35.584 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:35.584 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:35.584 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:35.584 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:35.584 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:33:35.584 22:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:33:35.584 22:32:30 
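The [: : integer expression expected message above is a genuine (if harmless) warning in this run: test/nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' because the variable it tests is empty here. Defaulting the expansion would silence it; a one-line sketch with a hypothetical stand-in name:

    # SOME_FLAG stands in for whatever common.sh line 33 actually tests
    [ "${SOME_FLAG:-0}" -eq 1 ]   # an empty value now falls back to 0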
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:43.724 22:32:37 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:43.724 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:43.724 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:43.724 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:43.724 22:32:37 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:43.724 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:33:43.724 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:33:44.663 22:32:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:33:48.866 22:32:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:33:53.079 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:33:53.079 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:53.079 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:53.079 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:53.079 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:53.079 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:53.079 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:53.079 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:53.079 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:53.079 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:53.079 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # 
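Before the ADQ run, adq_reload_driver bounces the E810 driver so the ports come back without stale traffic-class state, loading sch_mqprio first because ADQ steers queues through mqprio. As in the trace:

    modprobe -a sch_mqprio
    rmmod ice
    modprobe ice
    sleep 5   # let the cvl_* ports re-enumerate before nvmftestinit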
gather_supported_nvmf_pci_devs 00:33:53.079 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:33:53.079 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:33:53.079 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:53.079 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:33:53.079 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:53.079 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:53.079 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:53.079 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:53.079 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:53.079 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:33:53.079 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:53.079 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:33:53.079 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:33:53.079 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:33:53.079 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:33:53.079 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:33:53.079 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:33:53.079 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:53.079 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:53.079 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:53.079 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:53.079 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:53.079 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:53.079 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:53.080 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:53.080 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 
'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:53.080 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:53.080 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:53.080 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:53.434 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:53.434 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:53.434 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:53.434 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:53.434 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:53.434 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:53.434 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:53.434 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:53.434 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:53.434 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:33:53.434 00:33:53.434 --- 10.0.0.2 ping statistics --- 00:33:53.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:53.434 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:33:53.434 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:53.434 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:53.434 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:33:53.434 00:33:53.434 --- 10.0.0.1 ping statistics --- 00:33:53.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:53.434 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:33:53.434 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:53.434 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:33:53.434 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:53.434 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:53.434 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:53.434 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:53.434 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:53.434 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:53.434 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:53.434 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:33:53.434 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:53.434 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:53.434 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:33:53.434 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=264831 00:33:53.434 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 264831 00:33:53.434 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:33:53.434 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 264831 ']' 00:33:53.434 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:53.434 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:53.434 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:53.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:53.434 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:53.434 22:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:33:53.717 [2024-10-01 22:32:48.681425] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
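Annotation: the nvmf_tcp_init trace above builds the loopback test bed on a single dual-port E810 NIC. Port cvl_0_0 (the target side) is moved into the cvl_0_0_ns_spdk network namespace and given 10.0.0.2/24, while cvl_0_1 (the initiator side) stays in the root namespace with 10.0.0.1/24; the iptables ACCEPT rule is comment-tagged with SPDK_NVMF so teardown can strip it later, and both directions are ping-verified before nvmf_tgt is started inside the namespace with --wait-for-rpc. A minimal standalone sketch of the same pattern (the namespace name tgt_ns is illustrative; interface names and addresses are taken from this run):

    ip netns add tgt_ns                              # isolated namespace for the target port
    ip link set cvl_0_0 netns tgt_ns                 # target port leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side stays in the root namespace
    ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec tgt_ns ip link set cvl_0_0 up
    ip netns exec tgt_ns ip link set lo up
    # Comment-tag the rule so cleanup can do: iptables-save | grep -v SPDK_NVMF | iptables-restore
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                               # initiator -> target
    ip netns exec tgt_ns ping -c 1 10.0.0.1          # target -> initiator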
00:33:53.717 [2024-10-01 22:32:48.681490] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:53.717 [2024-10-01 22:32:48.754757] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:53.717 [2024-10-01 22:32:48.831435] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:53.717 [2024-10-01 22:32:48.831473] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:53.717 [2024-10-01 22:32:48.831480] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:53.717 [2024-10-01 22:32:48.831487] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:53.717 [2024-10-01 22:32:48.831493] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:53.717 [2024-10-01 22:32:48.831651] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:33:53.717 [2024-10-01 22:32:48.831752] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:33:53.717 [2024-10-01 22:32:48.832072] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:33:53.717 [2024-10-01 22:32:48.832074] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:54.288 22:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:54.288 22:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:33:54.288 22:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:54.288 22:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:54.288 22:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:33:54.288 22:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:54.288 22:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:33:54.288 22:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:33:54.288 22:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:33:54.288 22:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.288 22:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:33:54.549 22:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.549 22:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:33:54.549 22:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:33:54.549 22:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.549 22:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:33:54.549 22:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.549 
22:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:33:54.549 22:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.549 22:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:33:54.549 22:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.549 22:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:33:54.549 22:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.549 22:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:33:54.549 [2024-10-01 22:32:49.715081] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:54.549 22:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.549 22:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:33:54.549 22:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.549 22:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:33:54.549 Malloc1 00:33:54.549 22:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.549 22:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:54.549 22:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.549 22:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:33:54.549 22:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.549 22:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:54.549 22:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.549 22:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:33:54.549 22:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.549 22:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:54.549 22:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.549 22:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:33:54.549 [2024-10-01 22:32:49.774347] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:54.549 22:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.549 22:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=265120 00:33:54.549 22:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:33:54.549 22:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:57.093 22:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:33:57.093 22:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.093 22:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:33:57.093 22:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.093 22:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:33:57.093 "tick_rate": 2400000000, 00:33:57.093 "poll_groups": [ 00:33:57.093 { 00:33:57.093 "name": "nvmf_tgt_poll_group_000", 00:33:57.093 "admin_qpairs": 1, 00:33:57.093 "io_qpairs": 1, 00:33:57.093 "current_admin_qpairs": 1, 00:33:57.093 "current_io_qpairs": 1, 00:33:57.093 "pending_bdev_io": 0, 00:33:57.093 "completed_nvme_io": 19399, 00:33:57.093 "transports": [ 00:33:57.093 { 00:33:57.093 "trtype": "TCP" 00:33:57.093 } 00:33:57.093 ] 00:33:57.093 }, 00:33:57.093 { 00:33:57.093 "name": "nvmf_tgt_poll_group_001", 00:33:57.093 "admin_qpairs": 0, 00:33:57.093 "io_qpairs": 1, 00:33:57.093 "current_admin_qpairs": 0, 00:33:57.093 "current_io_qpairs": 1, 00:33:57.093 "pending_bdev_io": 0, 00:33:57.093 "completed_nvme_io": 27767, 00:33:57.093 "transports": [ 00:33:57.093 { 00:33:57.093 "trtype": "TCP" 00:33:57.093 } 00:33:57.093 ] 00:33:57.093 }, 00:33:57.093 { 00:33:57.093 "name": "nvmf_tgt_poll_group_002", 00:33:57.093 "admin_qpairs": 0, 00:33:57.093 "io_qpairs": 1, 00:33:57.093 "current_admin_qpairs": 0, 00:33:57.093 "current_io_qpairs": 1, 00:33:57.093 "pending_bdev_io": 0, 00:33:57.093 "completed_nvme_io": 19703, 00:33:57.093 "transports": [ 00:33:57.093 { 00:33:57.093 "trtype": "TCP" 00:33:57.093 } 00:33:57.093 ] 00:33:57.093 }, 00:33:57.093 { 00:33:57.093 "name": "nvmf_tgt_poll_group_003", 00:33:57.093 "admin_qpairs": 0, 00:33:57.093 "io_qpairs": 1, 00:33:57.093 "current_admin_qpairs": 0, 00:33:57.093 "current_io_qpairs": 1, 00:33:57.093 "pending_bdev_io": 0, 00:33:57.093 "completed_nvme_io": 20195, 00:33:57.093 "transports": [ 00:33:57.093 { 00:33:57.093 "trtype": "TCP" 00:33:57.093 } 00:33:57.093 ] 00:33:57.093 } 00:33:57.093 ] 00:33:57.093 }' 00:33:57.093 22:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:33:57.093 22:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:33:57.093 22:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:33:57.093 22:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:33:57.093 22:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 265120 00:34:05.230 Initializing NVMe Controllers 00:34:05.230 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:05.230 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:34:05.230 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:34:05.230 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:34:05.230 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 
7 00:34:05.230 Initialization complete. Launching workers. 00:34:05.230 ======================================================== 00:34:05.230 Latency(us) 00:34:05.230 Device Information : IOPS MiB/s Average min max 00:34:05.230 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11023.79 43.06 5805.32 1256.35 9484.98 00:34:05.230 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 15015.29 58.65 4262.50 1115.01 8684.94 00:34:05.230 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13092.49 51.14 4888.36 1291.26 12006.57 00:34:05.230 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13375.39 52.25 4784.42 1363.82 12072.30 00:34:05.230 ======================================================== 00:34:05.230 Total : 52506.96 205.11 4875.42 1115.01 12072.30 00:34:05.230 00:34:05.230 22:32:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:34:05.230 22:32:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:05.230 22:32:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:34:05.230 22:32:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:05.230 22:32:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:34:05.230 22:32:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:05.230 22:32:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:05.230 rmmod nvme_tcp 00:34:05.230 rmmod nvme_fabrics 00:34:05.230 rmmod nvme_keyring 00:34:05.230 22:32:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:05.230 22:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:34:05.230 22:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:34:05.230 22:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 264831 ']' 00:34:05.230 22:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 264831 00:34:05.230 22:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 264831 ']' 00:34:05.230 22:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 264831 00:34:05.230 22:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:34:05.230 22:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:05.230 22:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 264831 00:34:05.230 22:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:05.230 22:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:05.230 22:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 264831' 00:34:05.230 killing process with pid 264831 00:34:05.230 22:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 264831 00:34:05.230 22:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 264831 00:34:05.230 22:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:05.230 22:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:05.230 22:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:05.230 22:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:34:05.230 22:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:34:05.231 22:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:05.231 22:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:34:05.231 22:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:05.231 22:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:05.231 22:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:05.231 22:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:05.231 22:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:07.139 22:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:07.139 22:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:34:07.139 22:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:34:07.139 22:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:34:09.048 22:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:34:11.587 22:33:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:16.878 22:33:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:16.878 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:16.878 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:16.878 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:16.879 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:16.879 22:33:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:16.879 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:16.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:16.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.683 ms 00:34:16.879 00:34:16.879 --- 10.0.0.2 ping statistics --- 00:34:16.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:16.879 rtt min/avg/max/mdev = 0.683/0.683/0.683/0.000 ms 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:16.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:16.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:34:16.879 00:34:16.879 --- 10.0.0.1 ping statistics --- 00:34:16.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:16.879 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:34:16.879 net.core.busy_poll = 1 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 
00:34:16.879 net.core.busy_read = 1 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:34:16.879 22:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:34:16.879 22:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:34:16.879 22:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:34:16.879 22:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:34:17.140 22:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:34:17.140 22:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:17.140 22:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:17.140 22:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:34:17.140 22:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=270472 00:34:17.140 22:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 270472 00:34:17.140 22:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:17.140 22:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 270472 ']' 00:34:17.140 22:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:17.140 22:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:17.140 22:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:17.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:17.140 22:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:17.140 22:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:34:17.140 [2024-10-01 22:33:12.236646] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:34:17.140 [2024-10-01 22:33:12.236716] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:17.140 [2024-10-01 22:33:12.305444] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:17.140 [2024-10-01 22:33:12.370283] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
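Annotation: after reloading the ice driver, adq_configure_driver (traced above) turns on the host-side ADQ knobs: hw-tc-offload and the channel-pkt-inspect-optimize private flag on the E810 port, the busy-poll sysctls, a channel-mode mqprio root qdisc that carves the queues into two traffic classes, and a hardware-only flower filter that steers NVMe/TCP traffic into the second class. Condensed, the sequence is as follows (device name, address, and queue layout are taken from this run; in the trace each command actually runs inside the target namespace via ip netns exec):

    ethtool --offload cvl_0_0 hw-tc-offload on       # let the NIC offload traffic classes
    ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1                   # poll sockets instead of sleeping
    sysctl -w net.core.busy_read=1
    # Two traffic classes: TC0 = 2 queues at offset 0 (default), TC1 = 2 queues at offset 2 (ADQ)
    tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev cvl_0_0 ingress
    # Steer NVMe/TCP (TCP dst port 4420) into hw_tc 1 entirely in hardware (skip_sw)
    tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

On the SPDK side, this second pass pairs the driver setup with sock_impl_set_options --enable-placement-id 1 and nvmf_create_transport ... --sock-priority 1 (both were 0 in the first, non-ADQ pass), which is what lets the target place incoming connections onto the ADQ queue set.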
00:34:17.140 [2024-10-01 22:33:12.370322] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:17.140 [2024-10-01 22:33:12.370331] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:17.140 [2024-10-01 22:33:12.370338] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:17.140 [2024-10-01 22:33:12.370344] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:17.140 [2024-10-01 22:33:12.370483] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:34:17.140 [2024-10-01 22:33:12.370595] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:34:17.140 [2024-10-01 22:33:12.370748] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:34:17.140 [2024-10-01 22:33:12.370870] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:34:18.084 22:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:18.084 22:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:34:18.084 22:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:18.084 22:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:18.084 22:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:34:18.084 22:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:18.084 22:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:34:18.084 22:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:34:18.084 22:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:34:18.084 22:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.084 22:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:34:18.084 22:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.084 22:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:34:18.084 22:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:34:18.084 22:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.084 22:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:34:18.084 22:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.084 22:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:34:18.084 22:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.084 22:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:34:18.084 22:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.084 22:33:13 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:34:18.084 22:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.084 22:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:34:18.084 [2024-10-01 22:33:13.256158] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:18.084 22:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.084 22:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:34:18.084 22:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.084 22:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:34:18.084 Malloc1 00:34:18.084 22:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.084 22:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:18.084 22:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.084 22:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:34:18.084 22:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.084 22:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:34:18.084 22:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.084 22:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:34:18.084 22:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.084 22:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:18.084 22:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.084 22:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:34:18.084 [2024-10-01 22:33:13.315552] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:18.084 22:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.084 22:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=270725 00:34:18.084 22:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:34:18.084 22:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:20.631 22:33:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:34:20.631 22:33:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.631 22:33:15 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:34:20.631 22:33:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.631 22:33:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:34:20.631 "tick_rate": 2400000000, 00:34:20.631 "poll_groups": [ 00:34:20.631 { 00:34:20.631 "name": "nvmf_tgt_poll_group_000", 00:34:20.631 "admin_qpairs": 1, 00:34:20.631 "io_qpairs": 3, 00:34:20.631 "current_admin_qpairs": 1, 00:34:20.631 "current_io_qpairs": 3, 00:34:20.631 "pending_bdev_io": 0, 00:34:20.631 "completed_nvme_io": 29715, 00:34:20.631 "transports": [ 00:34:20.631 { 00:34:20.631 "trtype": "TCP" 00:34:20.631 } 00:34:20.631 ] 00:34:20.631 }, 00:34:20.631 { 00:34:20.631 "name": "nvmf_tgt_poll_group_001", 00:34:20.631 "admin_qpairs": 0, 00:34:20.631 "io_qpairs": 1, 00:34:20.631 "current_admin_qpairs": 0, 00:34:20.631 "current_io_qpairs": 1, 00:34:20.631 "pending_bdev_io": 0, 00:34:20.631 "completed_nvme_io": 34635, 00:34:20.631 "transports": [ 00:34:20.631 { 00:34:20.631 "trtype": "TCP" 00:34:20.631 } 00:34:20.631 ] 00:34:20.631 }, 00:34:20.631 { 00:34:20.631 "name": "nvmf_tgt_poll_group_002", 00:34:20.631 "admin_qpairs": 0, 00:34:20.631 "io_qpairs": 0, 00:34:20.631 "current_admin_qpairs": 0, 00:34:20.631 "current_io_qpairs": 0, 00:34:20.631 "pending_bdev_io": 0, 00:34:20.631 "completed_nvme_io": 0, 00:34:20.631 "transports": [ 00:34:20.631 { 00:34:20.631 "trtype": "TCP" 00:34:20.631 } 00:34:20.631 ] 00:34:20.631 }, 00:34:20.631 { 00:34:20.631 "name": "nvmf_tgt_poll_group_003", 00:34:20.631 "admin_qpairs": 0, 00:34:20.631 "io_qpairs": 0, 00:34:20.631 "current_admin_qpairs": 0, 00:34:20.631 "current_io_qpairs": 0, 00:34:20.631 "pending_bdev_io": 0, 00:34:20.631 "completed_nvme_io": 0, 00:34:20.631 "transports": [ 00:34:20.631 { 00:34:20.631 "trtype": "TCP" 00:34:20.631 } 00:34:20.631 ] 00:34:20.631 } 00:34:20.631 ] 00:34:20.631 }' 00:34:20.631 22:33:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:34:20.631 22:33:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:34:20.631 22:33:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:34:20.631 22:33:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:34:20.631 22:33:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 270725 00:34:28.772 Initializing NVMe Controllers 00:34:28.772 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:28.772 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:34:28.772 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:34:28.772 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:34:28.772 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:34:28.772 Initialization complete. Launching workers. 
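Annotation: the nvmf_get_stats output above is how the script validates ADQ placement before the perf results (which follow below) are reported. With --enable-placement-id 1, only two of the four poll groups carry I/O qpairs (group 000 has 3, group 001 has 1), so the check counts groups whose current_io_qpairs is 0 and requires at least 2 of them; the first, non-ADQ pass instead required exactly one qpair on every group. A sketch of the same check against a saved stats document (the file name stats.json is illustrative; the jq expression and threshold are the ones traced above):

    # 'length' on each selected object just gives wc something to count,
    # one output line per idle poll group.
    count=$(jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' stats.json | wc -l)
    if [[ $count -lt 2 ]]; then
        echo "ADQ placement failed: only $count idle poll groups" >&2
        exit 1
    fi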
00:34:28.772 ======================================================== 00:34:28.772 Latency(us) 00:34:28.772 Device Information : IOPS MiB/s Average min max 00:34:28.772 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7004.50 27.36 9150.68 839.88 57770.25 00:34:28.772 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6757.20 26.40 9470.79 1319.10 55682.84 00:34:28.772 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 18590.50 72.62 3441.99 1013.91 45025.41 00:34:28.772 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6739.50 26.33 9497.40 1362.95 57019.63 00:34:28.772 ======================================================== 00:34:28.772 Total : 39091.70 152.70 6550.96 839.88 57770.25 00:34:28.772 00:34:28.772 22:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:34:28.772 22:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:28.772 22:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:34:28.772 22:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:28.772 22:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:34:28.772 22:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:28.772 22:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:28.772 rmmod nvme_tcp 00:34:28.772 rmmod nvme_fabrics 00:34:28.772 rmmod nvme_keyring 00:34:28.772 22:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:28.772 22:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:34:28.772 22:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:34:28.772 22:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 270472 ']' 00:34:28.772 22:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 270472 00:34:28.772 22:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 270472 ']' 00:34:28.772 22:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 270472 00:34:28.772 22:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:34:28.772 22:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:28.772 22:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 270472 00:34:28.772 22:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:28.772 22:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:28.772 22:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 270472' 00:34:28.772 killing process with pid 270472 00:34:28.772 22:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 270472 00:34:28.772 22:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 270472 00:34:28.772 22:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:28.772 22:33:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:28.772 22:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:28.772 22:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:34:28.772 22:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:34:28.772 22:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:28.772 22:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:34:28.772 22:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:28.772 22:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:28.772 22:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:28.772 22:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:28.772 22:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:32.074 22:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:32.074 22:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:34:32.074 00:34:32.074 real 0m56.387s 00:34:32.074 user 2m50.061s 00:34:32.074 sys 0m12.763s 00:34:32.074 22:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:32.074 22:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:34:32.074 ************************************ 00:34:32.074 END TEST nvmf_perf_adq 00:34:32.074 ************************************ 00:34:32.074 22:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:34:32.074 22:33:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:32.074 22:33:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:32.074 22:33:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:34:32.074 ************************************ 00:34:32.074 START TEST nvmf_shutdown 00:34:32.074 ************************************ 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:34:32.074 * Looking for test storage... 
00:34:32.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:32.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:32.074 --rc genhtml_branch_coverage=1 00:34:32.074 --rc genhtml_function_coverage=1 00:34:32.074 --rc genhtml_legend=1 00:34:32.074 --rc geninfo_all_blocks=1 00:34:32.074 --rc geninfo_unexecuted_blocks=1 00:34:32.074 00:34:32.074 ' 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:32.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:32.074 --rc genhtml_branch_coverage=1 00:34:32.074 --rc genhtml_function_coverage=1 00:34:32.074 --rc genhtml_legend=1 00:34:32.074 --rc geninfo_all_blocks=1 00:34:32.074 --rc geninfo_unexecuted_blocks=1 00:34:32.074 00:34:32.074 ' 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:32.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:32.074 --rc genhtml_branch_coverage=1 00:34:32.074 --rc genhtml_function_coverage=1 00:34:32.074 --rc genhtml_legend=1 00:34:32.074 --rc geninfo_all_blocks=1 00:34:32.074 --rc geninfo_unexecuted_blocks=1 00:34:32.074 00:34:32.074 ' 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:32.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:32.074 --rc genhtml_branch_coverage=1 00:34:32.074 --rc genhtml_function_coverage=1 00:34:32.074 --rc genhtml_legend=1 00:34:32.074 --rc geninfo_all_blocks=1 00:34:32.074 --rc geninfo_unexecuted_blocks=1 00:34:32.074 00:34:32.074 ' 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
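The lcov probe traced just above is a pure-bash dotted-version comparison: both version strings are split on '.' and '-' into arrays (IFS=.-), each field is checked as a decimal, and the fields are compared left to right for as many positions as the longer version has. A self-contained sketch of the same idea (not the scripts/common.sh original, which also handles the other comparison operators):

version_lt() { # version_lt 1.15 2 -> true, because 1 < 2 in the first field
    local IFS=.-
    local -a ver1=($1) ver2=($2)
    local v a b
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        [[ $a =~ ^[0-9]+$ ]] || a=0  # non-numeric fields compare as 0
        [[ $b =~ ^[0-9]+$ ]] || b=0
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1  # equal versions are not less-than
}

version_lt 1.15 2 && echo 'lcov predates 2.x, use the legacy --rc flags'

Here the comparison decides that the installed lcov (1.15) is older than 2, which is why the legacy --rc lcov_branch_coverage=1 / lcov_function_coverage=1 spellings appear in the exported LCOV_OPTS above.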
00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:34:32.074 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:34:32.075 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:32.075 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:32.075 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:32.075 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:32.075 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:32.075 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:34:32.075 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:32.075 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:32.075 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:32.075 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.075 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.075 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.075 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:34:32.075 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.075 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:34:32.075 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:32.075 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:32.075 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:32.075 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:32.075 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:32.075 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:32.075 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:32.075 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:32.075 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:32.075 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:32.075 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:34:32.075 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:34:32.075 22:33:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:34:32.075 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:32.075 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:32.075 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:34:32.075 ************************************ 00:34:32.075 START TEST nvmf_shutdown_tc1 00:34:32.075 ************************************ 00:34:32.075 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:34:32.075 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:34:32.075 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:34:32.075 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:32.075 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:32.075 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:32.075 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:32.075 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:32.075 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:32.075 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:32.075 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:32.075 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:32.075 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:32.075 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:34:32.075 22:33:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:40.222 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:40.222 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:34:40.222 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:40.222 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:40.222 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:40.222 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:40.222 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:40.222 22:33:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:34:40.222 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:40.222 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:34:40.222 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:34:40.222 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:34:40.222 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:34:40.222 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:34:40.222 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:34:40.222 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:40.222 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:40.222 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:40.222 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:40.222 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:40.222 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:40.222 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:40.222 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:40.222 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:40.222 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:40.222 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:40.222 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:40.222 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:40.222 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:40.222 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:40.222 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:40.222 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:40.222 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:40.223 22:33:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:40.223 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:40.223 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:40.223 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:40.223 22:33:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:40.223 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # is_hw=yes 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:40.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:40.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.572 ms 00:34:40.223 00:34:40.223 --- 10.0.0.2 ping statistics --- 00:34:40.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:40.223 rtt min/avg/max/mdev = 0.572/0.572/0.572/0.000 ms 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:40.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:40.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:34:40.223 00:34:40.223 --- 10.0.0.1 ping statistics --- 00:34:40.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:40.223 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # return 0 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:40.223 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # nvmfpid=277262 00:34:40.224 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # waitforlisten 277262 00:34:40.224 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:34:40.224 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 277262 ']' 00:34:40.224 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:40.224 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:40.224 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:40.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
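The stretch of trace ending here is nvmf_tcp_init building the physical-NIC test topology, after which nvmfappstart launches nvmf_tgt inside the namespace (pid 277262): the first ice port (cvl_0_0) moves into a private network namespace as the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule pins TCP/4420 open, and one ping in each direction (0.572 ms and 0.274 ms above) proves the path before any NVMe traffic flows. Condensed to the commands actually traced:

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"             # target port now lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                          # root namespace -> target
ip netns exec "$NS" ping -c 1 10.0.0.1      # target namespace -> initiator

The comment on the iptables rule is what lets the iptr cleanup seen earlier (iptables-save | grep -v SPDK_NVMF | iptables-restore) strip exactly the rules the test added and nothing else.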
00:34:40.224 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:40.224 22:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:40.224 [2024-10-01 22:33:34.731171] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:34:40.224 [2024-10-01 22:33:34.731238] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:40.224 [2024-10-01 22:33:34.822034] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:40.224 [2024-10-01 22:33:34.916959] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:40.224 [2024-10-01 22:33:34.917022] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:40.224 [2024-10-01 22:33:34.917032] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:40.224 [2024-10-01 22:33:34.917040] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:40.224 [2024-10-01 22:33:34.917046] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:40.224 [2024-10-01 22:33:34.917183] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:34:40.224 [2024-10-01 22:33:34.917350] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:34:40.224 [2024-10-01 22:33:34.917516] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:34:40.224 [2024-10-01 22:33:34.917516] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:34:40.484 22:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:40.484 22:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:34:40.484 22:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:40.485 22:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:40.485 22:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:40.485 22:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:40.485 22:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:40.485 22:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.485 22:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:40.485 [2024-10-01 22:33:35.591652] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:40.485 22:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.485 22:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:34:40.485 22:33:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:34:40.485 22:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:40.485 22:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:40.485 22:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:34:40.485 22:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:34:40.485 22:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:34:40.485 22:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:34:40.485 22:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:34:40.485 22:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:34:40.485 22:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:34:40.485 22:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:34:40.485 22:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:34:40.485 22:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:34:40.485 22:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:34:40.485 22:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:34:40.485 22:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:34:40.485 22:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:34:40.485 22:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:34:40.485 22:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:34:40.485 22:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:34:40.485 22:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:34:40.485 22:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:34:40.485 22:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:34:40.485 22:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:34:40.485 22:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:34:40.485 22:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.485 22:33:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:40.485 Malloc1 
00:34:40.485 [2024-10-01 22:33:35.699090] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:40.485 Malloc2 00:34:40.745 Malloc3 00:34:40.745 Malloc4 00:34:40.745 Malloc5 00:34:40.745 Malloc6 00:34:40.745 Malloc7 00:34:40.745 Malloc8 00:34:40.745 Malloc9 00:34:41.006 Malloc10 00:34:41.006 22:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.006 22:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:34:41.006 22:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:41.006 22:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:41.006 22:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=277502 00:34:41.006 22:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 277502 /var/tmp/bdevperf.sock 00:34:41.006 22:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 277502 ']' 00:34:41.006 22:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:41.006 22:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:41.006 22:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:34:41.006 22:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:41.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
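With the target listening and its ten Malloc-backed subsystems created, shutdown_tc1 proper can begin. Its skeleton, pieced together from the shutdown.sh line numbers that appear in the surrounding trace (@78-@92): start a throwaway bdev_svc app fed by the generated ten-controller JSON, wait for its RPC socket, SIGKILL it so no graceful shutdown path runs, then verify the nvmf target itself survived and can still serve a bdevperf run. Roughly (waitforlisten stands in for the autotest helper of the same name; pids, paths, and flags are the ones in this log):

bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json {1..10}) &
perfpid=$!                          # 277502 in this run
waitforlisten "$perfpid" /var/tmp/bdevperf.sock
kill -9 "$perfpid"                  # hard kill: the initiator app gets no cleanup at all
rm -f /var/run/spdk_bdev1
sleep 1
kill -0 "$nvmfpid"                  # 277262: the target must still be alive
bdevperf --json <(gen_nvmf_target_json {1..10}) -q 64 -o 65536 -w verify -t 1

The point of the test is the kill -0: abruptly losing a connected initiator must not take the target down.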
00:34:41.006 22:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:34:41.006 22:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:41.006 22:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:41.006 22:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:34:41.006 22:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:34:41.006 22:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:41.006 22:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:41.006 { 00:34:41.006 "params": { 00:34:41.006 "name": "Nvme$subsystem", 00:34:41.006 "trtype": "$TEST_TRANSPORT", 00:34:41.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:41.006 "adrfam": "ipv4", 00:34:41.006 "trsvcid": "$NVMF_PORT", 00:34:41.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:41.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:41.006 "hdgst": ${hdgst:-false}, 00:34:41.006 "ddgst": ${ddgst:-false} 00:34:41.006 }, 00:34:41.006 "method": "bdev_nvme_attach_controller" 00:34:41.006 } 00:34:41.006 EOF 00:34:41.006 )") 00:34:41.006 22:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:34:41.006 22:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:41.006 22:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:41.006 { 00:34:41.006 "params": { 00:34:41.006 "name": "Nvme$subsystem", 00:34:41.006 "trtype": "$TEST_TRANSPORT", 00:34:41.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:41.006 "adrfam": "ipv4", 00:34:41.006 "trsvcid": "$NVMF_PORT", 00:34:41.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:41.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:41.006 "hdgst": ${hdgst:-false}, 00:34:41.006 "ddgst": ${ddgst:-false} 00:34:41.006 }, 00:34:41.006 "method": "bdev_nvme_attach_controller" 00:34:41.006 } 00:34:41.006 EOF 00:34:41.006 )") 00:34:41.006 22:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:34:41.006 22:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:41.006 22:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:41.006 { 00:34:41.006 "params": { 00:34:41.006 "name": "Nvme$subsystem", 00:34:41.006 "trtype": "$TEST_TRANSPORT", 00:34:41.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:41.006 "adrfam": "ipv4", 00:34:41.006 "trsvcid": "$NVMF_PORT", 00:34:41.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:41.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:41.006 "hdgst": ${hdgst:-false}, 00:34:41.006 "ddgst": ${ddgst:-false} 00:34:41.006 }, 00:34:41.006 "method": "bdev_nvme_attach_controller" 00:34:41.006 } 00:34:41.006 EOF 00:34:41.006 )") 00:34:41.006 22:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:34:41.006 22:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:41.006 22:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:41.006 { 00:34:41.006 "params": { 00:34:41.006 "name": "Nvme$subsystem", 00:34:41.006 "trtype": "$TEST_TRANSPORT", 00:34:41.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:41.006 "adrfam": "ipv4", 00:34:41.006 "trsvcid": "$NVMF_PORT", 00:34:41.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:41.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:41.006 "hdgst": ${hdgst:-false}, 00:34:41.006 "ddgst": ${ddgst:-false} 00:34:41.006 }, 00:34:41.006 "method": "bdev_nvme_attach_controller" 00:34:41.006 } 00:34:41.006 EOF 00:34:41.006 )") 00:34:41.006 22:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:34:41.006 22:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:41.006 22:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:41.006 { 00:34:41.006 "params": { 00:34:41.006 "name": "Nvme$subsystem", 00:34:41.006 "trtype": "$TEST_TRANSPORT", 00:34:41.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:41.006 "adrfam": "ipv4", 00:34:41.006 "trsvcid": "$NVMF_PORT", 00:34:41.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:41.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:41.006 "hdgst": ${hdgst:-false}, 00:34:41.006 "ddgst": ${ddgst:-false} 00:34:41.006 }, 00:34:41.006 "method": "bdev_nvme_attach_controller" 00:34:41.006 } 00:34:41.006 EOF 00:34:41.006 )") 00:34:41.006 22:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:34:41.006 22:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:41.007 22:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:41.007 { 00:34:41.007 "params": { 00:34:41.007 "name": "Nvme$subsystem", 00:34:41.007 "trtype": "$TEST_TRANSPORT", 00:34:41.007 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:41.007 "adrfam": "ipv4", 00:34:41.007 "trsvcid": "$NVMF_PORT", 00:34:41.007 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:41.007 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:41.007 "hdgst": ${hdgst:-false}, 00:34:41.007 "ddgst": ${ddgst:-false} 00:34:41.007 }, 00:34:41.007 "method": "bdev_nvme_attach_controller" 00:34:41.007 } 00:34:41.007 EOF 00:34:41.007 )") 00:34:41.007 22:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:34:41.007 22:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:41.007 22:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:41.007 { 00:34:41.007 "params": { 00:34:41.007 "name": "Nvme$subsystem", 00:34:41.007 "trtype": "$TEST_TRANSPORT", 00:34:41.007 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:41.007 "adrfam": "ipv4", 00:34:41.007 "trsvcid": "$NVMF_PORT", 00:34:41.007 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:41.007 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:41.007 "hdgst": ${hdgst:-false}, 00:34:41.007 "ddgst": ${ddgst:-false} 00:34:41.007 }, 00:34:41.007 "method": "bdev_nvme_attach_controller" 00:34:41.007 } 00:34:41.007 EOF 00:34:41.007 )") 
00:34:41.007 22:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:34:41.007 [2024-10-01 22:33:36.159616] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:34:41.007 [2024-10-01 22:33:36.159694] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:34:41.007 22:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:41.007 22:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:41.007 { 00:34:41.007 "params": { 00:34:41.007 "name": "Nvme$subsystem", 00:34:41.007 "trtype": "$TEST_TRANSPORT", 00:34:41.007 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:41.007 "adrfam": "ipv4", 00:34:41.007 "trsvcid": "$NVMF_PORT", 00:34:41.007 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:41.007 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:41.007 "hdgst": ${hdgst:-false}, 00:34:41.007 "ddgst": ${ddgst:-false} 00:34:41.007 }, 00:34:41.007 "method": "bdev_nvme_attach_controller" 00:34:41.007 } 00:34:41.007 EOF 00:34:41.007 )") 00:34:41.007 22:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:34:41.007 22:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:41.007 22:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:41.007 { 00:34:41.007 "params": { 00:34:41.007 "name": "Nvme$subsystem", 00:34:41.007 "trtype": "$TEST_TRANSPORT", 00:34:41.007 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:41.007 "adrfam": "ipv4", 00:34:41.007 "trsvcid": "$NVMF_PORT", 00:34:41.007 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:41.007 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:41.007 "hdgst": ${hdgst:-false}, 00:34:41.007 "ddgst": ${ddgst:-false} 00:34:41.007 }, 00:34:41.007 "method": "bdev_nvme_attach_controller" 00:34:41.007 } 00:34:41.007 EOF 00:34:41.007 )") 00:34:41.007 22:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:34:41.007 22:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:41.007 22:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:41.007 { 00:34:41.007 "params": { 00:34:41.007 "name": "Nvme$subsystem", 00:34:41.007 "trtype": "$TEST_TRANSPORT", 00:34:41.007 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:41.007 "adrfam": "ipv4", 00:34:41.007 "trsvcid": "$NVMF_PORT", 00:34:41.007 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:41.007 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:41.007 "hdgst": ${hdgst:-false}, 00:34:41.007 "ddgst": ${ddgst:-false} 00:34:41.007 }, 00:34:41.007 "method": "bdev_nvme_attach_controller" 00:34:41.007 } 00:34:41.007 EOF 00:34:41.007 )") 00:34:41.007 22:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:34:41.007 22:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 
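What the trace is assembling here is gen_nvmf_target_json: for each requested subsystem number it appends one here-doc fragment describing a bdev_nvme_attach_controller call ($TEST_TRANSPORT/$NVMF_FIRST_TARGET_IP/$NVMF_PORT expand to tcp/10.0.0.2/4420 in this run), then joins the fragments with IFS=, and hands the result to jq, which prints the expanded JSON seen just below. A simplified stand-alone version of that generator (the real helper in nvmf/common.sh embeds the fragments in a full SPDK config document; wrapping them in a bare JSON array here is just enough for jq to validate and pretty-print):

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # one attach-controller entry per subsystem, mirroring the output below
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    printf '[%s]' "${config[*]}" | jq .
}

gen_nvmf_target_json {1..10}   # ten controllers, cnode1 through cnode10

Feeding this through a process substitution (--json /dev/fd/63 in the trace) is what lets bdev_svc and bdevperf pick up all ten controllers without a temp file on disk.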
00:34:41.007 22:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:34:41.007 22:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:34:41.007 "params": { 00:34:41.007 "name": "Nvme1", 00:34:41.007 "trtype": "tcp", 00:34:41.007 "traddr": "10.0.0.2", 00:34:41.007 "adrfam": "ipv4", 00:34:41.007 "trsvcid": "4420", 00:34:41.007 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:41.007 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:41.007 "hdgst": false, 00:34:41.007 "ddgst": false 00:34:41.007 }, 00:34:41.007 "method": "bdev_nvme_attach_controller" 00:34:41.007 },{ 00:34:41.007 "params": { 00:34:41.007 "name": "Nvme2", 00:34:41.007 "trtype": "tcp", 00:34:41.007 "traddr": "10.0.0.2", 00:34:41.007 "adrfam": "ipv4", 00:34:41.007 "trsvcid": "4420", 00:34:41.007 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:41.007 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:41.007 "hdgst": false, 00:34:41.007 "ddgst": false 00:34:41.007 }, 00:34:41.007 "method": "bdev_nvme_attach_controller" 00:34:41.007 },{ 00:34:41.007 "params": { 00:34:41.007 "name": "Nvme3", 00:34:41.007 "trtype": "tcp", 00:34:41.007 "traddr": "10.0.0.2", 00:34:41.007 "adrfam": "ipv4", 00:34:41.007 "trsvcid": "4420", 00:34:41.007 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:34:41.007 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:34:41.007 "hdgst": false, 00:34:41.007 "ddgst": false 00:34:41.007 }, 00:34:41.007 "method": "bdev_nvme_attach_controller" 00:34:41.007 },{ 00:34:41.007 "params": { 00:34:41.007 "name": "Nvme4", 00:34:41.007 "trtype": "tcp", 00:34:41.007 "traddr": "10.0.0.2", 00:34:41.007 "adrfam": "ipv4", 00:34:41.007 "trsvcid": "4420", 00:34:41.007 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:34:41.007 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:34:41.007 "hdgst": false, 00:34:41.007 "ddgst": false 00:34:41.007 }, 00:34:41.007 "method": "bdev_nvme_attach_controller" 00:34:41.007 },{ 00:34:41.007 "params": { 00:34:41.007 "name": "Nvme5", 00:34:41.007 "trtype": "tcp", 00:34:41.007 "traddr": "10.0.0.2", 00:34:41.007 "adrfam": "ipv4", 00:34:41.007 "trsvcid": "4420", 00:34:41.007 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:34:41.007 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:34:41.007 "hdgst": false, 00:34:41.007 "ddgst": false 00:34:41.007 }, 00:34:41.007 "method": "bdev_nvme_attach_controller" 00:34:41.007 },{ 00:34:41.007 "params": { 00:34:41.007 "name": "Nvme6", 00:34:41.007 "trtype": "tcp", 00:34:41.007 "traddr": "10.0.0.2", 00:34:41.007 "adrfam": "ipv4", 00:34:41.007 "trsvcid": "4420", 00:34:41.007 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:34:41.007 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:34:41.007 "hdgst": false, 00:34:41.007 "ddgst": false 00:34:41.007 }, 00:34:41.007 "method": "bdev_nvme_attach_controller" 00:34:41.007 },{ 00:34:41.007 "params": { 00:34:41.007 "name": "Nvme7", 00:34:41.007 "trtype": "tcp", 00:34:41.007 "traddr": "10.0.0.2", 00:34:41.007 "adrfam": "ipv4", 00:34:41.007 "trsvcid": "4420", 00:34:41.007 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:34:41.007 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:34:41.007 "hdgst": false, 00:34:41.007 "ddgst": false 00:34:41.007 }, 00:34:41.007 "method": "bdev_nvme_attach_controller" 00:34:41.007 },{ 00:34:41.007 "params": { 00:34:41.007 "name": "Nvme8", 00:34:41.007 "trtype": "tcp", 00:34:41.007 "traddr": "10.0.0.2", 00:34:41.007 "adrfam": "ipv4", 00:34:41.007 "trsvcid": "4420", 00:34:41.007 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:34:41.007 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:34:41.007 "hdgst": false, 00:34:41.007 "ddgst": false 00:34:41.007 }, 00:34:41.007 "method": "bdev_nvme_attach_controller" 00:34:41.007 },{ 00:34:41.007 "params": { 00:34:41.007 "name": "Nvme9", 00:34:41.007 "trtype": "tcp", 00:34:41.007 "traddr": "10.0.0.2", 00:34:41.007 "adrfam": "ipv4", 00:34:41.007 "trsvcid": "4420", 00:34:41.008 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:34:41.008 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:34:41.008 "hdgst": false, 00:34:41.008 "ddgst": false 00:34:41.008 }, 00:34:41.008 "method": "bdev_nvme_attach_controller" 00:34:41.008 },{ 00:34:41.008 "params": { 00:34:41.008 "name": "Nvme10", 00:34:41.008 "trtype": "tcp", 00:34:41.008 "traddr": "10.0.0.2", 00:34:41.008 "adrfam": "ipv4", 00:34:41.008 "trsvcid": "4420", 00:34:41.008 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:34:41.008 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:34:41.008 "hdgst": false, 00:34:41.008 "ddgst": false 00:34:41.008 }, 00:34:41.008 "method": "bdev_nvme_attach_controller" 00:34:41.008 }' 00:34:41.008 [2024-10-01 22:33:36.222757] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:41.267 [2024-10-01 22:33:36.287515] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:34:42.649 22:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:42.649 22:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:34:42.649 22:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:34:42.649 22:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.649 22:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:42.649 22:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.649 22:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 277502 00:34:42.649 22:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:34:42.649 22:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:34:43.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 277502 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:34:43.589 22:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 277262 00:34:43.589 22:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:34:43.589 22:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:34:43.589 22:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:34:43.589 22:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:34:43.589 22:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in 
"${@:-1}" 00:34:43.589 22:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:43.589 { 00:34:43.589 "params": { 00:34:43.589 "name": "Nvme$subsystem", 00:34:43.589 "trtype": "$TEST_TRANSPORT", 00:34:43.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:43.589 "adrfam": "ipv4", 00:34:43.589 "trsvcid": "$NVMF_PORT", 00:34:43.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:43.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:43.589 "hdgst": ${hdgst:-false}, 00:34:43.589 "ddgst": ${ddgst:-false} 00:34:43.589 }, 00:34:43.589 "method": "bdev_nvme_attach_controller" 00:34:43.589 } 00:34:43.589 EOF 00:34:43.589 )") 00:34:43.589 22:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:34:43.589 22:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:43.589 22:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:43.589 { 00:34:43.589 "params": { 00:34:43.589 "name": "Nvme$subsystem", 00:34:43.589 "trtype": "$TEST_TRANSPORT", 00:34:43.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:43.589 "adrfam": "ipv4", 00:34:43.589 "trsvcid": "$NVMF_PORT", 00:34:43.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:43.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:43.589 "hdgst": ${hdgst:-false}, 00:34:43.589 "ddgst": ${ddgst:-false} 00:34:43.589 }, 00:34:43.589 "method": "bdev_nvme_attach_controller" 00:34:43.589 } 00:34:43.589 EOF 00:34:43.589 )") 00:34:43.589 22:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:34:43.589 22:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:43.589 22:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:43.589 { 00:34:43.589 "params": { 00:34:43.589 "name": "Nvme$subsystem", 00:34:43.589 "trtype": "$TEST_TRANSPORT", 00:34:43.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:43.589 "adrfam": "ipv4", 00:34:43.589 "trsvcid": "$NVMF_PORT", 00:34:43.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:43.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:43.589 "hdgst": ${hdgst:-false}, 00:34:43.589 "ddgst": ${ddgst:-false} 00:34:43.589 }, 00:34:43.589 "method": "bdev_nvme_attach_controller" 00:34:43.589 } 00:34:43.589 EOF 00:34:43.589 )") 00:34:43.589 22:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:34:43.589 22:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:43.589 22:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:43.589 { 00:34:43.589 "params": { 00:34:43.589 "name": "Nvme$subsystem", 00:34:43.589 "trtype": "$TEST_TRANSPORT", 00:34:43.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:43.589 "adrfam": "ipv4", 00:34:43.589 "trsvcid": "$NVMF_PORT", 00:34:43.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:43.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:43.589 "hdgst": ${hdgst:-false}, 00:34:43.589 "ddgst": ${ddgst:-false} 00:34:43.589 }, 00:34:43.589 "method": "bdev_nvme_attach_controller" 00:34:43.589 } 00:34:43.589 EOF 00:34:43.589 )") 00:34:43.589 22:33:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:34:43.589 22:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:43.589 22:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:43.589 { 00:34:43.589 "params": { 00:34:43.589 "name": "Nvme$subsystem", 00:34:43.589 "trtype": "$TEST_TRANSPORT", 00:34:43.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:43.589 "adrfam": "ipv4", 00:34:43.589 "trsvcid": "$NVMF_PORT", 00:34:43.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:43.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:43.589 "hdgst": ${hdgst:-false}, 00:34:43.589 "ddgst": ${ddgst:-false} 00:34:43.589 }, 00:34:43.589 "method": "bdev_nvme_attach_controller" 00:34:43.589 } 00:34:43.589 EOF 00:34:43.589 )") 00:34:43.589 22:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:34:43.589 22:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:43.589 22:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:43.589 { 00:34:43.589 "params": { 00:34:43.589 "name": "Nvme$subsystem", 00:34:43.589 "trtype": "$TEST_TRANSPORT", 00:34:43.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:43.589 "adrfam": "ipv4", 00:34:43.589 "trsvcid": "$NVMF_PORT", 00:34:43.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:43.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:43.589 "hdgst": ${hdgst:-false}, 00:34:43.589 "ddgst": ${ddgst:-false} 00:34:43.589 }, 00:34:43.589 "method": "bdev_nvme_attach_controller" 00:34:43.589 } 00:34:43.589 EOF 00:34:43.589 )") 00:34:43.589 [2024-10-01 22:33:38.662507] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
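A note on the EAL parameter line that follows: every SPDK process in this log is given its own --file-prefix (spdk_pid278039 for this bdevperf instance; the nvmf_tgt started for tc2 further down runs with -i 0 and gets spdk0). DPDK derives its hugepage backing-file names from that prefix, which is what lets the target and the I/O generator share one host's hugepages without colliding; --huge-unlink additionally unlinks those files once they are mapped, and --no-shconf skips DPDK's shared configuration. A sketch of the resulting layout, assuming the default hugetlbfs mount point:

/dev/hugepages/spdk0map_0            # nvmf_tgt, started with -i 0
/dev/hugepages/spdk_pid278039map_0   # this bdevperf run, per-pid prefix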
00:34:43.589 [2024-10-01 22:33:38.662564] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid278039 ] 00:34:43.589 22:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:34:43.589 22:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:43.589 22:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:43.589 { 00:34:43.590 "params": { 00:34:43.590 "name": "Nvme$subsystem", 00:34:43.590 "trtype": "$TEST_TRANSPORT", 00:34:43.590 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:43.590 "adrfam": "ipv4", 00:34:43.590 "trsvcid": "$NVMF_PORT", 00:34:43.590 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:43.590 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:43.590 "hdgst": ${hdgst:-false}, 00:34:43.590 "ddgst": ${ddgst:-false} 00:34:43.590 }, 00:34:43.590 "method": "bdev_nvme_attach_controller" 00:34:43.590 } 00:34:43.590 EOF 00:34:43.590 )") 00:34:43.590 22:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:34:43.590 22:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:43.590 22:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:43.590 { 00:34:43.590 "params": { 00:34:43.590 "name": "Nvme$subsystem", 00:34:43.590 "trtype": "$TEST_TRANSPORT", 00:34:43.590 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:43.590 "adrfam": "ipv4", 00:34:43.590 "trsvcid": "$NVMF_PORT", 00:34:43.590 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:43.590 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:43.590 "hdgst": ${hdgst:-false}, 00:34:43.590 "ddgst": ${ddgst:-false} 00:34:43.590 }, 00:34:43.590 "method": "bdev_nvme_attach_controller" 00:34:43.590 } 00:34:43.590 EOF 00:34:43.590 )") 00:34:43.590 22:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:34:43.590 22:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:43.590 22:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:43.590 { 00:34:43.590 "params": { 00:34:43.590 "name": "Nvme$subsystem", 00:34:43.590 "trtype": "$TEST_TRANSPORT", 00:34:43.590 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:43.590 "adrfam": "ipv4", 00:34:43.590 "trsvcid": "$NVMF_PORT", 00:34:43.590 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:43.590 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:43.590 "hdgst": ${hdgst:-false}, 00:34:43.590 "ddgst": ${ddgst:-false} 00:34:43.590 }, 00:34:43.590 "method": "bdev_nvme_attach_controller" 00:34:43.590 } 00:34:43.590 EOF 00:34:43.590 )") 00:34:43.590 22:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:34:43.590 22:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:43.590 22:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:43.590 { 00:34:43.590 "params": { 00:34:43.590 "name": 
"Nvme$subsystem", 00:34:43.590 "trtype": "$TEST_TRANSPORT", 00:34:43.590 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:43.590 "adrfam": "ipv4", 00:34:43.590 "trsvcid": "$NVMF_PORT", 00:34:43.590 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:43.590 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:43.590 "hdgst": ${hdgst:-false}, 00:34:43.590 "ddgst": ${ddgst:-false} 00:34:43.590 }, 00:34:43.590 "method": "bdev_nvme_attach_controller" 00:34:43.590 } 00:34:43.590 EOF 00:34:43.590 )") 00:34:43.590 22:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:34:43.590 22:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 00:34:43.590 22:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:34:43.590 22:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:34:43.590 "params": { 00:34:43.590 "name": "Nvme1", 00:34:43.590 "trtype": "tcp", 00:34:43.590 "traddr": "10.0.0.2", 00:34:43.590 "adrfam": "ipv4", 00:34:43.590 "trsvcid": "4420", 00:34:43.590 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:43.590 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:43.590 "hdgst": false, 00:34:43.590 "ddgst": false 00:34:43.590 }, 00:34:43.590 "method": "bdev_nvme_attach_controller" 00:34:43.590 },{ 00:34:43.590 "params": { 00:34:43.590 "name": "Nvme2", 00:34:43.590 "trtype": "tcp", 00:34:43.590 "traddr": "10.0.0.2", 00:34:43.590 "adrfam": "ipv4", 00:34:43.590 "trsvcid": "4420", 00:34:43.590 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:43.590 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:43.590 "hdgst": false, 00:34:43.590 "ddgst": false 00:34:43.590 }, 00:34:43.590 "method": "bdev_nvme_attach_controller" 00:34:43.590 },{ 00:34:43.590 "params": { 00:34:43.590 "name": "Nvme3", 00:34:43.590 "trtype": "tcp", 00:34:43.590 "traddr": "10.0.0.2", 00:34:43.590 "adrfam": "ipv4", 00:34:43.590 "trsvcid": "4420", 00:34:43.590 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:34:43.590 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:34:43.590 "hdgst": false, 00:34:43.590 "ddgst": false 00:34:43.590 }, 00:34:43.590 "method": "bdev_nvme_attach_controller" 00:34:43.590 },{ 00:34:43.590 "params": { 00:34:43.590 "name": "Nvme4", 00:34:43.590 "trtype": "tcp", 00:34:43.590 "traddr": "10.0.0.2", 00:34:43.590 "adrfam": "ipv4", 00:34:43.590 "trsvcid": "4420", 00:34:43.590 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:34:43.590 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:34:43.590 "hdgst": false, 00:34:43.590 "ddgst": false 00:34:43.590 }, 00:34:43.590 "method": "bdev_nvme_attach_controller" 00:34:43.590 },{ 00:34:43.590 "params": { 00:34:43.590 "name": "Nvme5", 00:34:43.590 "trtype": "tcp", 00:34:43.590 "traddr": "10.0.0.2", 00:34:43.590 "adrfam": "ipv4", 00:34:43.590 "trsvcid": "4420", 00:34:43.590 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:34:43.590 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:34:43.590 "hdgst": false, 00:34:43.590 "ddgst": false 00:34:43.590 }, 00:34:43.590 "method": "bdev_nvme_attach_controller" 00:34:43.590 },{ 00:34:43.590 "params": { 00:34:43.590 "name": "Nvme6", 00:34:43.590 "trtype": "tcp", 00:34:43.590 "traddr": "10.0.0.2", 00:34:43.590 "adrfam": "ipv4", 00:34:43.590 "trsvcid": "4420", 00:34:43.590 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:34:43.590 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:34:43.590 "hdgst": false, 00:34:43.590 "ddgst": false 00:34:43.590 }, 00:34:43.590 "method": "bdev_nvme_attach_controller" 00:34:43.590 },{ 
00:34:43.591 "params": { 00:34:43.591 "name": "Nvme7", 00:34:43.591 "trtype": "tcp", 00:34:43.591 "traddr": "10.0.0.2", 00:34:43.591 "adrfam": "ipv4", 00:34:43.591 "trsvcid": "4420", 00:34:43.591 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:34:43.591 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:34:43.591 "hdgst": false, 00:34:43.591 "ddgst": false 00:34:43.591 }, 00:34:43.591 "method": "bdev_nvme_attach_controller" 00:34:43.591 },{ 00:34:43.591 "params": { 00:34:43.591 "name": "Nvme8", 00:34:43.591 "trtype": "tcp", 00:34:43.591 "traddr": "10.0.0.2", 00:34:43.591 "adrfam": "ipv4", 00:34:43.591 "trsvcid": "4420", 00:34:43.591 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:34:43.591 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:34:43.591 "hdgst": false, 00:34:43.591 "ddgst": false 00:34:43.591 }, 00:34:43.591 "method": "bdev_nvme_attach_controller" 00:34:43.591 },{ 00:34:43.591 "params": { 00:34:43.591 "name": "Nvme9", 00:34:43.591 "trtype": "tcp", 00:34:43.591 "traddr": "10.0.0.2", 00:34:43.591 "adrfam": "ipv4", 00:34:43.591 "trsvcid": "4420", 00:34:43.591 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:34:43.591 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:34:43.591 "hdgst": false, 00:34:43.591 "ddgst": false 00:34:43.591 }, 00:34:43.591 "method": "bdev_nvme_attach_controller" 00:34:43.591 },{ 00:34:43.591 "params": { 00:34:43.591 "name": "Nvme10", 00:34:43.591 "trtype": "tcp", 00:34:43.591 "traddr": "10.0.0.2", 00:34:43.591 "adrfam": "ipv4", 00:34:43.591 "trsvcid": "4420", 00:34:43.591 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:34:43.591 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:34:43.591 "hdgst": false, 00:34:43.591 "ddgst": false 00:34:43.591 }, 00:34:43.591 "method": "bdev_nvme_attach_controller" 00:34:43.591 }' 00:34:43.591 [2024-10-01 22:33:38.725752] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:43.591 [2024-10-01 22:33:38.790147] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:34:44.974 Running I/O for 1 seconds... 
00:34:46.200 1863.00 IOPS, 116.44 MiB/s
00:34:46.200 Latency(us)
00:34:46.200 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:46.200 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:34:46.200 Verification LBA range: start 0x0 length 0x400
00:34:46.200 Nvme1n1 : 1.11 231.67 14.48 0.00 0.00 273457.28 20097.71 246415.36
00:34:46.200 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:34:46.200 Verification LBA range: start 0x0 length 0x400
00:34:46.200 Nvme2n1 : 1.13 227.07 14.19 0.00 0.00 274237.87 16274.77 284863.15
00:34:46.200 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:34:46.200 Verification LBA range: start 0x0 length 0x400
00:34:46.200 Nvme3n1 : 1.11 231.21 14.45 0.00 0.00 264515.20 20753.07 241172.48
00:34:46.200 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:34:46.200 Verification LBA range: start 0x0 length 0x400
00:34:46.200 Nvme4n1 : 1.18 271.27 16.95 0.00 0.00 222047.23 18677.76 248162.99
00:34:46.200 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:34:46.200 Verification LBA range: start 0x0 length 0x400
00:34:46.200 Nvme5n1 : 1.12 229.05 14.32 0.00 0.00 257486.08 30801.92 227191.47
00:34:46.200 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:34:46.200 Verification LBA range: start 0x0 length 0x400
00:34:46.200 Nvme6n1 : 1.11 230.73 14.42 0.00 0.00 250651.09 20316.16 230686.72
00:34:46.200 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:34:46.200 Verification LBA range: start 0x0 length 0x400
00:34:46.200 Nvme7n1 : 1.12 228.09 14.26 0.00 0.00 249238.19 14964.05 246415.36
00:34:46.200 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:34:46.200 Verification LBA range: start 0x0 length 0x400
00:34:46.200 Nvme8n1 : 1.19 269.89 16.87 0.00 0.00 208250.20 16274.77 265639.25
00:34:46.200 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:34:46.200 Verification LBA range: start 0x0 length 0x400
00:34:46.200 Nvme9n1 : 1.20 267.74 16.73 0.00 0.00 206003.43 3181.23 269134.51
00:34:46.200 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:34:46.200 Verification LBA range: start 0x0 length 0x400
00:34:46.200 Nvme10n1 : 1.20 266.92 16.68 0.00 0.00 203093.25 10813.44 263891.63
00:34:46.200 ===================================================================================================================
00:34:46.200 Total : 2453.63 153.35 0.00 0.00 238075.30 3181.23 284863.15
00:34:46.460 22:33:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
00:34:46.460 22:33:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:34:46.460 22:33:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:34:46.460 22:33:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:34:46.460 22:33:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini
00:34:46.460 22:33:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # nvmfcleanup
00:34:46.460 22:33:41
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:34:46.460 22:33:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:46.460 22:33:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:34:46.460 22:33:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:46.460 22:33:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:46.460 rmmod nvme_tcp 00:34:46.460 rmmod nvme_fabrics 00:34:46.460 rmmod nvme_keyring 00:34:46.460 22:33:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:46.460 22:33:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:34:46.460 22:33:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:34:46.460 22:33:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@515 -- # '[' -n 277262 ']' 00:34:46.460 22:33:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # killprocess 277262 00:34:46.460 22:33:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 277262 ']' 00:34:46.460 22:33:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 277262 00:34:46.460 22:33:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:34:46.460 22:33:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:46.460 22:33:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 277262 00:34:46.461 22:33:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:46.461 22:33:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:46.461 22:33:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 277262' 00:34:46.461 killing process with pid 277262 00:34:46.461 22:33:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 277262 00:34:46.461 22:33:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 277262 00:34:46.722 22:33:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:46.722 22:33:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:46.722 22:33:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:46.722 22:33:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:34:46.722 22:33:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:46.722 22:33:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-save 00:34:46.723 22:33:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-restore 00:34:46.723 22:33:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:46.723 22:33:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:46.723 22:33:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:46.723 22:33:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:46.723 22:33:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:49.263 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:49.263 00:34:49.263 real 0m16.729s 00:34:49.263 user 0m34.107s 00:34:49.263 sys 0m6.779s 00:34:49.263 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:49.263 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:49.263 ************************************ 00:34:49.263 END TEST nvmf_shutdown_tc1 00:34:49.263 ************************************ 00:34:49.263 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:34:49.263 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:49.263 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:49.263 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:34:49.263 ************************************ 00:34:49.263 START TEST nvmf_shutdown_tc2 00:34:49.263 ************************************ 00:34:49.263 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:34:49.263 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:34:49.263 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:34:49.263 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:49.263 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:49.263 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:49.263 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:49.263 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:49.263 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:49.263 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:49.263 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
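A quick consistency check on the tc1 summary table above: bdevperf reports both IOPS and MiB/s per job, and with the 64 KiB I/O size used throughout this run (-o 65536) the two differ by exactly a factor of 16, since 65536 B is 1/16 MiB. Both the headline and the Total row agree:

$ printf '%.2f\n' "$(echo '1863.00 / 16' | bc -l)"   # headline: 1863.00 IOPS
116.44
$ printf '%.2f\n' "$(echo '2453.63 / 16' | bc -l)"   # Total row: 2453.63 IOPS
153.35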
00:34:49.263 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:49.263 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:49.263 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:34:49.263 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:49.263 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:49.263 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:49.264 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:49.264 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:49.264 22:33:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:49.264 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:49.264 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # is_hw=yes 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:49.264 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:49.265 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:49.265 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:49.265 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:49.265 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:49.265 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:49.265 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:49.265 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:49.265 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:49.265 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:49.265 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:49.265 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:49.265 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.688 ms
00:34:49.265
00:34:49.265 --- 10.0.0.2 ping statistics ---
00:34:49.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:49.265 rtt min/avg/max/mdev = 0.688/0.688/0.688/0.000 ms
00:34:49.265 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:34:49.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:34:49.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms
00:34:49.265
00:34:49.265 --- 10.0.0.1 ping statistics ---
00:34:49.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:49.265 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms
00:34:49.265 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:34:49.265 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # return 0
00:34:49.265 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:34:49.265 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:34:49.265 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:34:49.265 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:34:49.265 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:34:49.265 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:34:49.265 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:34:49.265 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:34:49.265 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:34:49.265 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:34:49.265 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:49.265 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # nvmfpid=279162
00:34:49.265 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # waitforlisten 279162
00:34:49.265 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:34:49.265 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 279162 ']'
00:34:49.265 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:49.265 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:34:49.265 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process
to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:49.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:49.265 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:49.265 22:33:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:49.526 [2024-10-01 22:33:44.534808] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:34:49.526 [2024-10-01 22:33:44.534895] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:49.526 [2024-10-01 22:33:44.624071] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:49.526 [2024-10-01 22:33:44.686073] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:49.526 [2024-10-01 22:33:44.686110] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:49.526 [2024-10-01 22:33:44.686116] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:49.526 [2024-10-01 22:33:44.686120] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:49.526 [2024-10-01 22:33:44.686125] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:49.526 [2024-10-01 22:33:44.686227] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:34:49.526 [2024-10-01 22:33:44.686384] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:34:49.526 [2024-10-01 22:33:44.686539] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:34:49.526 [2024-10-01 22:33:44.686541] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:34:50.098 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:50.098 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:34:50.098 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:50.098 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:50.098 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:50.359 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:50.359 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:50.359 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.359 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:50.359 [2024-10-01 22:33:45.381755] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:50.359 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.359 22:33:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:34:50.359 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:34:50.359 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:50.359 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:50.359 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:34:50.359 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:34:50.359 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:34:50.359 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:34:50.359 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:34:50.359 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:34:50.359 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:34:50.359 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:34:50.359 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:34:50.359 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:34:50.359 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:34:50.359 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:34:50.359 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:34:50.359 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:34:50.359 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:34:50.359 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:34:50.359 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:34:50.359 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:34:50.359 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:34:50.359 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:34:50.359 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:34:50.359 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:34:50.359 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.359 
22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:50.359 Malloc1 00:34:50.359 [2024-10-01 22:33:45.480795] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:50.359 Malloc2 00:34:50.359 Malloc3 00:34:50.359 Malloc4 00:34:50.619 Malloc5 00:34:50.619 Malloc6 00:34:50.619 Malloc7 00:34:50.619 Malloc8 00:34:50.619 Malloc9 00:34:50.619 Malloc10 00:34:50.619 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.619 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:34:50.619 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:50.619 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:50.880 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=279538 00:34:50.880 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 279538 /var/tmp/bdevperf.sock 00:34:50.880 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 279538 ']' 00:34:50.880 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:50.880 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:50.880 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:50.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
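Both daemons in this section come up the same way: launch in the background, record the pid (nvmfpid=279162 above, perfpid=279538 here), then block in waitforlisten with max_retries=100 until the RPC socket answers. xtrace does not expand the helper's body, so the following is only a sketch of the idea, with the socket-existence test being an assumption:

waitforlisten() {   # usage: waitforlisten <pid> [rpc socket path]
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do              # max_retries=100, as traced
        kill -0 "$pid" 2>/dev/null || return 1   # app died while starting
        [[ -S $sock ]] && return 0               # UNIX socket is up
        sleep 0.5
    done
    return 1
}

Once it returns, the test can drive the app over that socket, e.g. rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init as in tc1 above.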
00:34:50.880 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:34:50.880 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:50.880 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:34:50.880 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:50.880 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # config=() 00:34:50.880 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # local subsystem config 00:34:50.880 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:50.880 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:50.880 { 00:34:50.880 "params": { 00:34:50.880 "name": "Nvme$subsystem", 00:34:50.880 "trtype": "$TEST_TRANSPORT", 00:34:50.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:50.880 "adrfam": "ipv4", 00:34:50.880 "trsvcid": "$NVMF_PORT", 00:34:50.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:50.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:50.880 "hdgst": ${hdgst:-false}, 00:34:50.880 "ddgst": ${ddgst:-false} 00:34:50.880 }, 00:34:50.880 "method": "bdev_nvme_attach_controller" 00:34:50.880 } 00:34:50.880 EOF 00:34:50.880 )") 00:34:50.880 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:34:50.880 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:50.880 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:50.880 { 00:34:50.880 "params": { 00:34:50.880 "name": "Nvme$subsystem", 00:34:50.880 "trtype": "$TEST_TRANSPORT", 00:34:50.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:50.880 "adrfam": "ipv4", 00:34:50.880 "trsvcid": "$NVMF_PORT", 00:34:50.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:50.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:50.880 "hdgst": ${hdgst:-false}, 00:34:50.880 "ddgst": ${ddgst:-false} 00:34:50.880 }, 00:34:50.880 "method": "bdev_nvme_attach_controller" 00:34:50.880 } 00:34:50.880 EOF 00:34:50.880 )") 00:34:50.880 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:34:50.880 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:50.880 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:50.880 { 00:34:50.880 "params": { 00:34:50.880 "name": "Nvme$subsystem", 00:34:50.880 "trtype": "$TEST_TRANSPORT", 00:34:50.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:50.880 "adrfam": "ipv4", 00:34:50.880 "trsvcid": "$NVMF_PORT", 00:34:50.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:50.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:50.880 "hdgst": ${hdgst:-false}, 00:34:50.880 "ddgst": ${ddgst:-false} 00:34:50.880 }, 00:34:50.880 "method": 
"bdev_nvme_attach_controller" 00:34:50.880 } 00:34:50.880 EOF 00:34:50.880 )") 00:34:50.880 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:34:50.880 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:50.880 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:50.880 { 00:34:50.880 "params": { 00:34:50.880 "name": "Nvme$subsystem", 00:34:50.880 "trtype": "$TEST_TRANSPORT", 00:34:50.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:50.880 "adrfam": "ipv4", 00:34:50.880 "trsvcid": "$NVMF_PORT", 00:34:50.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:50.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:50.880 "hdgst": ${hdgst:-false}, 00:34:50.880 "ddgst": ${ddgst:-false} 00:34:50.880 }, 00:34:50.880 "method": "bdev_nvme_attach_controller" 00:34:50.880 } 00:34:50.880 EOF 00:34:50.880 )") 00:34:50.880 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:34:50.880 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:50.880 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:50.880 { 00:34:50.880 "params": { 00:34:50.880 "name": "Nvme$subsystem", 00:34:50.880 "trtype": "$TEST_TRANSPORT", 00:34:50.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:50.880 "adrfam": "ipv4", 00:34:50.880 "trsvcid": "$NVMF_PORT", 00:34:50.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:50.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:50.880 "hdgst": ${hdgst:-false}, 00:34:50.880 "ddgst": ${ddgst:-false} 00:34:50.880 }, 00:34:50.880 "method": "bdev_nvme_attach_controller" 00:34:50.880 } 00:34:50.881 EOF 00:34:50.881 )") 00:34:50.881 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:34:50.881 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:50.881 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:50.881 { 00:34:50.881 "params": { 00:34:50.881 "name": "Nvme$subsystem", 00:34:50.881 "trtype": "$TEST_TRANSPORT", 00:34:50.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:50.881 "adrfam": "ipv4", 00:34:50.881 "trsvcid": "$NVMF_PORT", 00:34:50.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:50.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:50.881 "hdgst": ${hdgst:-false}, 00:34:50.881 "ddgst": ${ddgst:-false} 00:34:50.881 }, 00:34:50.881 "method": "bdev_nvme_attach_controller" 00:34:50.881 } 00:34:50.881 EOF 00:34:50.881 )") 00:34:50.881 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:34:50.881 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:50.881 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:50.881 { 00:34:50.881 "params": { 00:34:50.881 "name": "Nvme$subsystem", 00:34:50.881 "trtype": "$TEST_TRANSPORT", 00:34:50.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:50.881 "adrfam": "ipv4", 00:34:50.881 "trsvcid": "$NVMF_PORT", 00:34:50.881 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:34:50.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:50.881 "hdgst": ${hdgst:-false}, 00:34:50.881 "ddgst": ${ddgst:-false} 00:34:50.881 }, 00:34:50.881 "method": "bdev_nvme_attach_controller" 00:34:50.881 } 00:34:50.881 EOF 00:34:50.881 )") 00:34:50.881 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:34:50.881 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:50.881 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:50.881 { 00:34:50.881 "params": { 00:34:50.881 "name": "Nvme$subsystem", 00:34:50.881 "trtype": "$TEST_TRANSPORT", 00:34:50.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:50.881 "adrfam": "ipv4", 00:34:50.881 "trsvcid": "$NVMF_PORT", 00:34:50.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:50.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:50.881 "hdgst": ${hdgst:-false}, 00:34:50.881 "ddgst": ${ddgst:-false} 00:34:50.881 }, 00:34:50.881 "method": "bdev_nvme_attach_controller" 00:34:50.881 } 00:34:50.881 EOF 00:34:50.881 )") 00:34:50.881 [2024-10-01 22:33:45.945275] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:34:50.881 [2024-10-01 22:33:45.945377] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid279538 ] 00:34:50.881 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:34:50.881 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:50.881 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:50.881 { 00:34:50.881 "params": { 00:34:50.881 "name": "Nvme$subsystem", 00:34:50.881 "trtype": "$TEST_TRANSPORT", 00:34:50.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:50.881 "adrfam": "ipv4", 00:34:50.881 "trsvcid": "$NVMF_PORT", 00:34:50.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:50.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:50.881 "hdgst": ${hdgst:-false}, 00:34:50.881 "ddgst": ${ddgst:-false} 00:34:50.881 }, 00:34:50.881 "method": "bdev_nvme_attach_controller" 00:34:50.881 } 00:34:50.881 EOF 00:34:50.881 )") 00:34:50.881 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:34:50.881 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:50.881 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:50.881 { 00:34:50.881 "params": { 00:34:50.881 "name": "Nvme$subsystem", 00:34:50.881 "trtype": "$TEST_TRANSPORT", 00:34:50.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:50.881 "adrfam": "ipv4", 00:34:50.881 "trsvcid": "$NVMF_PORT", 00:34:50.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:50.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:50.881 "hdgst": ${hdgst:-false}, 00:34:50.881 "ddgst": ${ddgst:-false} 00:34:50.881 }, 00:34:50.881 "method": "bdev_nvme_attach_controller" 00:34:50.881 } 00:34:50.881 EOF 00:34:50.881 )") 00:34:50.881 22:33:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:34:50.881 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # jq . 00:34:50.881 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@583 -- # IFS=, 00:34:50.881 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:34:50.881 "params": { 00:34:50.881 "name": "Nvme1", 00:34:50.881 "trtype": "tcp", 00:34:50.881 "traddr": "10.0.0.2", 00:34:50.881 "adrfam": "ipv4", 00:34:50.881 "trsvcid": "4420", 00:34:50.881 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:50.881 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:50.881 "hdgst": false, 00:34:50.881 "ddgst": false 00:34:50.881 }, 00:34:50.881 "method": "bdev_nvme_attach_controller" 00:34:50.881 },{ 00:34:50.881 "params": { 00:34:50.881 "name": "Nvme2", 00:34:50.881 "trtype": "tcp", 00:34:50.881 "traddr": "10.0.0.2", 00:34:50.881 "adrfam": "ipv4", 00:34:50.881 "trsvcid": "4420", 00:34:50.881 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:50.881 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:50.881 "hdgst": false, 00:34:50.881 "ddgst": false 00:34:50.881 }, 00:34:50.881 "method": "bdev_nvme_attach_controller" 00:34:50.881 },{ 00:34:50.881 "params": { 00:34:50.881 "name": "Nvme3", 00:34:50.881 "trtype": "tcp", 00:34:50.881 "traddr": "10.0.0.2", 00:34:50.881 "adrfam": "ipv4", 00:34:50.881 "trsvcid": "4420", 00:34:50.881 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:34:50.881 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:34:50.881 "hdgst": false, 00:34:50.881 "ddgst": false 00:34:50.881 }, 00:34:50.881 "method": "bdev_nvme_attach_controller" 00:34:50.881 },{ 00:34:50.881 "params": { 00:34:50.881 "name": "Nvme4", 00:34:50.881 "trtype": "tcp", 00:34:50.881 "traddr": "10.0.0.2", 00:34:50.881 "adrfam": "ipv4", 00:34:50.881 "trsvcid": "4420", 00:34:50.881 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:34:50.881 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:34:50.881 "hdgst": false, 00:34:50.881 "ddgst": false 00:34:50.881 }, 00:34:50.881 "method": "bdev_nvme_attach_controller" 00:34:50.881 },{ 00:34:50.881 "params": { 00:34:50.881 "name": "Nvme5", 00:34:50.881 "trtype": "tcp", 00:34:50.881 "traddr": "10.0.0.2", 00:34:50.881 "adrfam": "ipv4", 00:34:50.881 "trsvcid": "4420", 00:34:50.881 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:34:50.881 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:34:50.881 "hdgst": false, 00:34:50.881 "ddgst": false 00:34:50.881 }, 00:34:50.881 "method": "bdev_nvme_attach_controller" 00:34:50.881 },{ 00:34:50.881 "params": { 00:34:50.881 "name": "Nvme6", 00:34:50.881 "trtype": "tcp", 00:34:50.881 "traddr": "10.0.0.2", 00:34:50.881 "adrfam": "ipv4", 00:34:50.881 "trsvcid": "4420", 00:34:50.881 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:34:50.881 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:34:50.881 "hdgst": false, 00:34:50.881 "ddgst": false 00:34:50.881 }, 00:34:50.881 "method": "bdev_nvme_attach_controller" 00:34:50.881 },{ 00:34:50.881 "params": { 00:34:50.881 "name": "Nvme7", 00:34:50.881 "trtype": "tcp", 00:34:50.881 "traddr": "10.0.0.2", 00:34:50.881 "adrfam": "ipv4", 00:34:50.881 "trsvcid": "4420", 00:34:50.881 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:34:50.881 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:34:50.881 "hdgst": false, 00:34:50.881 "ddgst": false 00:34:50.881 }, 00:34:50.881 "method": "bdev_nvme_attach_controller" 00:34:50.881 },{ 00:34:50.881 "params": { 00:34:50.881 "name": "Nvme8", 00:34:50.881 "trtype": "tcp", 
00:34:50.881 "traddr": "10.0.0.2", 00:34:50.881 "adrfam": "ipv4", 00:34:50.881 "trsvcid": "4420", 00:34:50.881 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:34:50.881 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:34:50.881 "hdgst": false, 00:34:50.881 "ddgst": false 00:34:50.881 }, 00:34:50.881 "method": "bdev_nvme_attach_controller" 00:34:50.881 },{ 00:34:50.881 "params": { 00:34:50.881 "name": "Nvme9", 00:34:50.881 "trtype": "tcp", 00:34:50.881 "traddr": "10.0.0.2", 00:34:50.881 "adrfam": "ipv4", 00:34:50.881 "trsvcid": "4420", 00:34:50.881 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:34:50.881 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:34:50.881 "hdgst": false, 00:34:50.881 "ddgst": false 00:34:50.881 }, 00:34:50.881 "method": "bdev_nvme_attach_controller" 00:34:50.881 },{ 00:34:50.881 "params": { 00:34:50.881 "name": "Nvme10", 00:34:50.881 "trtype": "tcp", 00:34:50.881 "traddr": "10.0.0.2", 00:34:50.881 "adrfam": "ipv4", 00:34:50.881 "trsvcid": "4420", 00:34:50.881 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:34:50.881 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:34:50.881 "hdgst": false, 00:34:50.881 "ddgst": false 00:34:50.881 }, 00:34:50.882 "method": "bdev_nvme_attach_controller" 00:34:50.882 }' 00:34:50.882 [2024-10-01 22:33:46.009176] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:50.882 [2024-10-01 22:33:46.074250] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:34:52.267 Running I/O for 10 seconds... 00:34:52.267 22:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:52.267 22:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:34:52.267 22:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:34:52.267 22:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:52.267 22:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:52.528 22:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:52.528 22:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:34:52.528 22:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:34:52.528 22:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:34:52.528 22:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:34:52.528 22:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:34:52.528 22:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:34:52.528 22:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:34:52.528 22:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:34:52.528 22:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:34:52.528 22:33:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:52.528 22:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:52.528 22:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:52.528 22:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:34:52.528 22:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:34:52.528 22:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:34:52.788 22:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:34:52.788 22:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:34:52.788 22:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:34:52.788 22:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:34:52.788 22:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:52.788 22:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:52.788 22:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:52.788 22:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:34:52.788 22:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:34:52.788 22:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:34:53.049 22:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:34:53.049 22:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:34:53.049 22:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:34:53.049 22:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:34:53.049 22:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:53.049 22:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:53.311 22:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:53.311 22:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:34:53.311 22:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:34:53.311 22:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:34:53.311 22:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:34:53.311 22:33:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0
00:34:53.311 22:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 279538
00:34:53.311 22:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 279538 ']'
00:34:53.311 22:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 279538
00:34:53.311 22:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname
00:34:53.311 22:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:34:53.311 22:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 279538
00:34:53.311 22:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:34:53.311 22:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:34:53.311 22:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 279538'
00:34:53.311 killing process with pid 279538
00:34:53.311 22:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 279538
00:34:53.311 22:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 279538
00:34:53.311 Received shutdown signal, test time was about 0.983081 seconds
00:34:53.312
00:34:53.312 Latency(us)
00:34:53.312 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:53.312 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:34:53.312 Verification LBA range: start 0x0 length 0x400
00:34:53.312 Nvme1n1 : 0.96 200.13 12.51 0.00 0.00 315986.77 18022.40 255153.49
00:34:53.312 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:34:53.312 Verification LBA range: start 0x0 length 0x400
00:34:53.312 Nvme2n1 : 0.95 202.19 12.64 0.00 0.00 304688.07 15400.96 230686.72
00:34:53.312 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:34:53.312 Verification LBA range: start 0x0 length 0x400
00:34:53.312 Nvme3n1 : 0.98 266.36 16.65 0.00 0.00 226532.85 4532.91 223696.21
00:34:53.312 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:34:53.312 Verification LBA range: start 0x0 length 0x400
00:34:53.312 Nvme4n1 : 0.97 263.00 16.44 0.00 0.00 225859.41 34078.72 232434.35
00:34:53.312 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:34:53.312 Verification LBA range: start 0x0 length 0x400
00:34:53.312 Nvme5n1 : 0.98 266.65 16.67 0.00 0.00 217259.51 3686.40 232434.35
00:34:53.312 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:34:53.312 Verification LBA range: start 0x0 length 0x400
00:34:53.312 Nvme6n1 : 0.98 261.29 16.33 0.00 0.00 217736.75 17585.49 265639.25
00:34:53.312 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:34:53.312 Verification LBA range: start 0x0 length 0x400
00:34:53.312 Nvme7n1 : 0.98 260.64 16.29 0.00 0.00 213422.19 13216.43 220200.96
00:34:53.312 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:34:53.312 Verification LBA range: start 0x0 length 0x400
00:34:53.312 Nvme8n1 : 0.96 268.89 16.81 0.00 0.00 201210.45 1693.01 256901.12
00:34:53.312 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:34:53.312 Verification LBA range: start 0x0 length 0x400
00:34:53.312 Nvme9n1 : 0.95 202.42 12.65 0.00 0.00 260431.64 14636.37 255153.49
00:34:53.312 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:34:53.312 Verification LBA range: start 0x0 length 0x400
00:34:53.312 Nvme10n1 : 0.97 198.68 12.42 0.00 0.00 260098.56 18459.31 274377.39
00:34:53.312 ===================================================================================================================
00:34:53.312 Total : 2390.26 149.39 0.00 0.00 239657.70 1693.01 274377.39
00:34:53.572 22:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:34:54.515 22:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 279162
00:34:54.515 22:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
00:34:54.515 22:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:34:54.515 22:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:34:54.515 22:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:34:54.515 22:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini
00:34:54.515 22:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # nvmfcleanup
00:34:54.515 22:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync
00:34:54.515 22:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:34:54.515 22:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e
00:34:54.515 22:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20}
00:34:54.515 22:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:34:54.515 rmmod nvme_tcp
00:34:54.515 rmmod nvme_fabrics
00:34:54.515 rmmod nvme_keyring
00:34:54.515 22:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:34:54.515 22:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e
00:34:54.515 22:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0
00:34:54.515 22:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@515 -- # '[' -n 279162 ']'
00:34:54.515 22:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # killprocess 279162
00:34:54.515 22:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 279162 ']'
00:34:54.515 22:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 279162
00:34:54.515 22:33:49
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:34:54.515 22:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:54.515 22:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 279162 00:34:54.776 22:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:54.776 22:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:54.777 22:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 279162' 00:34:54.777 killing process with pid 279162 00:34:54.777 22:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 279162 00:34:54.777 22:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 279162 00:34:55.038 22:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:55.038 22:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:55.038 22:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:55.038 22:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:34:55.038 22:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-save 00:34:55.038 22:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:55.038 22:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-restore 00:34:55.038 22:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:55.039 22:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:55.039 22:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:55.039 22:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:55.039 22:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:56.952 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:56.952 00:34:56.952 real 0m8.084s 00:34:56.952 user 0m24.529s 00:34:56.952 sys 0m1.365s 00:34:56.952 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:56.952 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:56.952 ************************************ 00:34:56.952 END TEST nvmf_shutdown_tc2 00:34:56.952 ************************************ 00:34:57.214 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:34:57.214 22:33:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:57.214 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:57.214 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:34:57.214 ************************************ 00:34:57.214 START TEST nvmf_shutdown_tc3 00:34:57.214 ************************************ 00:34:57.214 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:34:57.214 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:34:57.214 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:34:57.214 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:57.214 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:57.214 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:57.214 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:57.214 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:57.214 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:57.214 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:57.214 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:57.214 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:57.214 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:57.214 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:34:57.214 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:57.214 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:57.214 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:34:57.214 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:57.214 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:57.214 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:57.214 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:57.214 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:57.214 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@319 -- # local -ga net_devs 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:57.215 22:33:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:57.215 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:57.215 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:57.215 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:57.215 22:33:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:57.215 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # is_hw=yes 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:57.215 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:57.216 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:57.216 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:57.216 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:57.478 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:57.478 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:57.478 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:57.478 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:57.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:57.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:34:57.478 00:34:57.478 --- 10.0.0.2 ping statistics --- 00:34:57.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:57.478 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:34:57.478 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:57.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:57.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:34:57.478 00:34:57.478 --- 10.0.0.1 ping statistics --- 00:34:57.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:57.478 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:34:57.478 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:57.478 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # return 0 00:34:57.478 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:57.478 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:57.478 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:57.478 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:57.478 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:57.478 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:57.478 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:57.478 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:34:57.478 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:57.478 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:57.478 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:57.478 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # nvmfpid=281006 00:34:57.478 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # waitforlisten 281006 00:34:57.478 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:34:57.478 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 281006 ']' 00:34:57.478 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:57.478 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:57.478 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:57.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
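The waitforlisten step above blocks until the freshly launched nvmf_tgt answers on its RPC socket. A minimal sketch of that kind of poll loop in bash; the helper name wait_for_rpc_socket, the retry count, and the use of scripts/rpc.py rpc_get_methods as the liveness probe are illustrative assumptions, not SPDK's exact waitforlisten implementation:

wait_for_rpc_socket() { # illustrative stand-in for autotest's waitforlisten
	local pid=$1 rpc_sock=${2:-/var/tmp/spdk.sock} retries=100
	while ((retries-- > 0)); do
		# give up early if the target process already died
		kill -0 "$pid" 2> /dev/null || return 1
		# the socket is usable once a trivial RPC round-trip succeeds
		if scripts/rpc.py -s "$rpc_sock" rpc_get_methods &> /dev/null; then
			return 0
		fi
		sleep 0.1
	done
	return 1
}

Once this returns 0 the harness can start issuing rpc_cmd calls against /var/tmp/spdk.sock, which is what the nvmf_create_transport trace below does.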
00:34:57.478 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:57.478 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:57.478 [2024-10-01 22:33:52.703144] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:34:57.478 [2024-10-01 22:33:52.703206] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:57.739 [2024-10-01 22:33:52.790915] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:57.739 [2024-10-01 22:33:52.852066] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:57.739 [2024-10-01 22:33:52.852099] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:57.739 [2024-10-01 22:33:52.852105] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:57.739 [2024-10-01 22:33:52.852110] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:57.739 [2024-10-01 22:33:52.852114] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:57.739 [2024-10-01 22:33:52.852228] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:34:57.739 [2024-10-01 22:33:52.852390] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:34:57.739 [2024-10-01 22:33:52.852551] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:34:57.739 [2024-10-01 22:33:52.852553] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:34:58.419 22:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:58.419 22:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:34:58.419 22:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:58.419 22:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:58.419 22:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:58.419 22:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:58.419 22:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:58.419 22:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.419 22:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:58.419 [2024-10-01 22:33:53.557240] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:58.419 22:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.419 22:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:34:58.419 22:33:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:34:58.419 22:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:58.419 22:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:58.419 22:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:34:58.419 22:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:34:58.419 22:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:34:58.419 22:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:34:58.419 22:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:34:58.419 22:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:34:58.419 22:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:34:58.419 22:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:34:58.419 22:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:34:58.419 22:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:34:58.419 22:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:34:58.419 22:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:34:58.419 22:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:34:58.419 22:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:34:58.419 22:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:34:58.419 22:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:34:58.419 22:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:34:58.419 22:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:34:58.419 22:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:34:58.419 22:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:34:58.419 22:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:34:58.419 22:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:34:58.419 22:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.419 22:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:58.419 Malloc1 
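Each cat in the shutdown.sh@28-@29 loop above appends one subsystem's block of RPC commands to rpcs.txt, and the rpc_cmd at shutdown.sh@36 replays the whole file in a single session; the Malloc1 through Malloc10 lines in this trace are the bdevs those commands create. A sketch of what each appended block plausibly looks like (the malloc geometry of 64 MiB with 512-byte blocks and the SPDK$i serial numbers are assumptions; the RPC method names themselves are standard SPDK ones):

for i in "${num_subsystems[@]}"; do
	# malloc bdev + subsystem + namespace + TCP listener, one batch per subsystem
	cat >> "$testdir/rpcs.txt" <<EOF
bdev_malloc_create 64 512 -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t $TEST_TRANSPORT -a $NVMF_FIRST_TARGET_IP -s $NVMF_PORT
EOF
done
rpc_cmd < "$testdir/rpcs.txt" # apply everything over one RPC connection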
00:34:58.419 [2024-10-01 22:33:53.656238] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:58.693 Malloc2 00:34:58.693 Malloc3 00:34:58.693 Malloc4 00:34:58.693 Malloc5 00:34:58.693 Malloc6 00:34:58.693 Malloc7 00:34:58.693 Malloc8 00:34:58.953 Malloc9 00:34:58.953 Malloc10 00:34:58.953 22:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.953 22:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:34:58.953 22:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:58.953 22:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:58.953 22:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=281387 00:34:58.953 22:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 281387 /var/tmp/bdevperf.sock 00:34:58.953 22:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 281387 ']' 00:34:58.953 22:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:58.953 22:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:58.953 22:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:58.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
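The gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 trace that follows (the same helper already traced for tc2 above) shows how the bdevperf --json config is built: one heredoc fragment per subsystem, comma-joined under IFS=, and validated/pretty-printed by jq. A simplified bash reconstruction of that pattern; the outer "subsystems"/"bdev" wrapper object is an assumption, since only the fragments and the printf/jq join are visible in the xtrace:

gen_nvmf_target_json() { # reconstructed sketch, not the verbatim helper
	local subsystem config=()
	for subsystem in "${@:-1}"; do
		# unquoted EOF: $TEST_TRANSPORT etc. expand now, baking values into the fragment
		config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
		)")
	done
	local IFS=,
	# "${config[*]}" joins the fragments with commas; jq . fails fast on bad JSON
	jq . <<JSON
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [ $(printf '%s\n' "${config[*]}") ]
    }
  ]
}
JSON
}

And once bdevperf is up, shutdown.sh polls it until real I/O is flowing. Reconstructed from the tc2 xtrace above (target/shutdown.sh@51 through @70, where read_io_count climbed 3 -> 67 -> 131 against the -ge 100 threshold):

waitforio() { # reconstruction of the target/shutdown.sh loop as traced above
	[ -z "$1" ] && return 1 # RPC socket, e.g. /var/tmp/bdevperf.sock
	[ -z "$2" ] && return 1 # bdev name, e.g. Nvme1n1
	local ret=1
	local i
	for ((i = 10; i != 0; i--)); do
		read_io_count=$(rpc_cmd -s "$1" bdev_get_iostat -b "$2" | jq -r '.bdevs[0].num_read_ops')
		if [ "$read_io_count" -ge 100 ]; then
			ret=0
			break
		fi
		sleep 0.25
	done
	return $ret
}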
00:34:58.953 22:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:34:58.953 22:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:58.953 22:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:34:58.953 22:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:58.953 22:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # config=() 00:34:58.953 22:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # local subsystem config 00:34:58.953 22:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:58.953 22:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:58.953 { 00:34:58.953 "params": { 00:34:58.953 "name": "Nvme$subsystem", 00:34:58.953 "trtype": "$TEST_TRANSPORT", 00:34:58.953 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:58.953 "adrfam": "ipv4", 00:34:58.953 "trsvcid": "$NVMF_PORT", 00:34:58.953 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:58.953 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:58.953 "hdgst": ${hdgst:-false}, 00:34:58.953 "ddgst": ${ddgst:-false} 00:34:58.953 }, 00:34:58.953 "method": "bdev_nvme_attach_controller" 00:34:58.953 } 00:34:58.953 EOF 00:34:58.953 )") 00:34:58.953 22:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat
[... nvmf/common.sh@560 for-loop iteration and heredoc above repeated verbatim for subsystems 2 through 9; identical trace lines elided ...]
00:34:58.954 [2024-10-01 22:33:54.104000] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:34:58.954 [2024-10-01 22:33:54.104054] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid281387 ] 00:34:58.954 22:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:58.954 22:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:58.954 { 00:34:58.954 "params": { 00:34:58.954 "name": "Nvme$subsystem", 00:34:58.954 "trtype": "$TEST_TRANSPORT", 00:34:58.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:58.954
"adrfam": "ipv4", 00:34:58.954 "trsvcid": "$NVMF_PORT", 00:34:58.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:58.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:58.954 "hdgst": ${hdgst:-false}, 00:34:58.954 "ddgst": ${ddgst:-false} 00:34:58.954 }, 00:34:58.954 "method": "bdev_nvme_attach_controller" 00:34:58.954 } 00:34:58.954 EOF 00:34:58.954 )") 00:34:58.954 22:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:34:58.954 22:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # jq . 00:34:58.954 22:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@583 -- # IFS=, 00:34:58.954 22:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:34:58.954 "params": { 00:34:58.954 "name": "Nvme1", 00:34:58.954 "trtype": "tcp", 00:34:58.954 "traddr": "10.0.0.2", 00:34:58.955 "adrfam": "ipv4", 00:34:58.955 "trsvcid": "4420", 00:34:58.955 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:58.955 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:58.955 "hdgst": false, 00:34:58.955 "ddgst": false 00:34:58.955 }, 00:34:58.955 "method": "bdev_nvme_attach_controller" 00:34:58.955 },{ 00:34:58.955 "params": { 00:34:58.955 "name": "Nvme2", 00:34:58.955 "trtype": "tcp", 00:34:58.955 "traddr": "10.0.0.2", 00:34:58.955 "adrfam": "ipv4", 00:34:58.955 "trsvcid": "4420", 00:34:58.955 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:58.955 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:58.955 "hdgst": false, 00:34:58.955 "ddgst": false 00:34:58.955 }, 00:34:58.955 "method": "bdev_nvme_attach_controller" 00:34:58.955 },{ 00:34:58.955 "params": { 00:34:58.955 "name": "Nvme3", 00:34:58.955 "trtype": "tcp", 00:34:58.955 "traddr": "10.0.0.2", 00:34:58.955 "adrfam": "ipv4", 00:34:58.955 "trsvcid": "4420", 00:34:58.955 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:34:58.955 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:34:58.955 "hdgst": false, 00:34:58.955 "ddgst": false 00:34:58.955 }, 00:34:58.955 "method": "bdev_nvme_attach_controller" 00:34:58.955 },{ 00:34:58.955 "params": { 00:34:58.955 "name": "Nvme4", 00:34:58.955 "trtype": "tcp", 00:34:58.955 "traddr": "10.0.0.2", 00:34:58.955 "adrfam": "ipv4", 00:34:58.955 "trsvcid": "4420", 00:34:58.955 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:34:58.955 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:34:58.955 "hdgst": false, 00:34:58.955 "ddgst": false 00:34:58.955 }, 00:34:58.955 "method": "bdev_nvme_attach_controller" 00:34:58.955 },{ 00:34:58.955 "params": { 00:34:58.955 "name": "Nvme5", 00:34:58.955 "trtype": "tcp", 00:34:58.955 "traddr": "10.0.0.2", 00:34:58.955 "adrfam": "ipv4", 00:34:58.955 "trsvcid": "4420", 00:34:58.955 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:34:58.955 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:34:58.955 "hdgst": false, 00:34:58.955 "ddgst": false 00:34:58.955 }, 00:34:58.955 "method": "bdev_nvme_attach_controller" 00:34:58.955 },{ 00:34:58.955 "params": { 00:34:58.955 "name": "Nvme6", 00:34:58.955 "trtype": "tcp", 00:34:58.955 "traddr": "10.0.0.2", 00:34:58.955 "adrfam": "ipv4", 00:34:58.955 "trsvcid": "4420", 00:34:58.955 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:34:58.955 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:34:58.955 "hdgst": false, 00:34:58.955 "ddgst": false 00:34:58.955 }, 00:34:58.955 "method": "bdev_nvme_attach_controller" 00:34:58.955 },{ 00:34:58.955 "params": { 00:34:58.955 "name": "Nvme7", 00:34:58.955 "trtype": "tcp", 00:34:58.955 "traddr": "10.0.0.2", 
00:34:58.955 "adrfam": "ipv4", 00:34:58.955 "trsvcid": "4420", 00:34:58.955 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:34:58.955 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:34:58.955 "hdgst": false, 00:34:58.955 "ddgst": false 00:34:58.955 }, 00:34:58.955 "method": "bdev_nvme_attach_controller" 00:34:58.955 },{ 00:34:58.955 "params": { 00:34:58.955 "name": "Nvme8", 00:34:58.955 "trtype": "tcp", 00:34:58.955 "traddr": "10.0.0.2", 00:34:58.955 "adrfam": "ipv4", 00:34:58.955 "trsvcid": "4420", 00:34:58.955 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:34:58.955 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:34:58.955 "hdgst": false, 00:34:58.955 "ddgst": false 00:34:58.955 }, 00:34:58.955 "method": "bdev_nvme_attach_controller" 00:34:58.955 },{ 00:34:58.955 "params": { 00:34:58.955 "name": "Nvme9", 00:34:58.955 "trtype": "tcp", 00:34:58.955 "traddr": "10.0.0.2", 00:34:58.955 "adrfam": "ipv4", 00:34:58.955 "trsvcid": "4420", 00:34:58.955 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:34:58.955 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:34:58.955 "hdgst": false, 00:34:58.955 "ddgst": false 00:34:58.955 }, 00:34:58.955 "method": "bdev_nvme_attach_controller" 00:34:58.955 },{ 00:34:58.955 "params": { 00:34:58.955 "name": "Nvme10", 00:34:58.955 "trtype": "tcp", 00:34:58.955 "traddr": "10.0.0.2", 00:34:58.955 "adrfam": "ipv4", 00:34:58.955 "trsvcid": "4420", 00:34:58.955 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:34:58.955 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:34:58.955 "hdgst": false, 00:34:58.955 "ddgst": false 00:34:58.955 }, 00:34:58.955 "method": "bdev_nvme_attach_controller" 00:34:58.955 }' 00:34:58.955 [2024-10-01 22:33:54.165400] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:59.215 [2024-10-01 22:33:54.230460] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:35:01.128 Running I/O for 10 seconds... 
00:35:01.389 22:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:01.389 22:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:35:01.389 22:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:35:01.389 22:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.389 22:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:35:01.389 22:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.389 22:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:01.389 22:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:35:01.389 22:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:35:01.389 22:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:35:01.389 22:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:35:01.389 22:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:35:01.389 22:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:35:01.389 22:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:35:01.389 22:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:35:01.389 22:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:35:01.389 22:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.389 22:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:35:01.671 22:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.671 22:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:35:01.671 22:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:35:01.671 22:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:35:01.671 22:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:35:01.671 22:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:35:01.671 22:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 281006 00:35:01.671 22:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 281006 ']' 
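The waitforio trace above polls bdevperf over its RPC socket until Nvme1n1 reports at least 100 completed reads (here it saw 131 on the first pass, set ret=0 and broke out). A minimal sketch of that loop, assuming scripts/rpc.py stands in for the repo's rpc_cmd wrapper and adding a pacing sleep between polls:

# waitforio_sketch: succeed once a bdev shows >=100 reads, fail after 10 tries.
waitforio_sketch() {
    local rpc_sock=$1 bdev=$2
    local ret=1 i read_io_count
    [ -z "$rpc_sock" ] && return 1
    [ -z "$bdev" ] && return 1
    for (( i = 10; i != 0; i-- )); do
        read_io_count=$(rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
        # same threshold as the trace: >=100 reads means I/O is flowing
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25   # pacing between polls is an assumption, not in the trace
    done
    return $ret
}
# usage: waitforio_sketch /var/tmp/bdevperf.sock Nvme1n1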
00:35:01.671 22:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 281006 00:35:01.671 22:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:35:01.671 22:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:01.671 22:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 281006 00:35:01.671 22:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:01.671 22:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:01.671 22:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 281006' killing process with pid 281006 00:35:01.671 22:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 281006 00:35:01.671 22:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 281006
00:35:01.672 [2024-10-01 22:33:56.746324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c57a0 is same with the state(6) to be set
[... previous message repeated for tqpair=0x13c57a0; further identical lines elided ...]
00:35:01.672 [2024-10-01 22:33:56.747657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f46e0 is same with the state(6) to be set
[... previous message repeated for tqpair=0x13f46e0; further identical lines elided ...]
00:35:01.673 [2024-10-01 22:33:56.748950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c5c70 is same with the state(6) to be set
[... previous message repeated for tqpair=0x13c5c70; further identical lines elided ...]
00:35:01.674 [2024-10-01 22:33:56.750325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6140 is same with the state(6) to be set
[... previous message repeated for tqpair=0x13c6140; further identical lines elided ...]
00:35:01.674 [2024-10-01 22:33:56.751796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6b00 is same with the state(6) to be set
[... previous message repeated for tqpair=0x13c6b00; further identical lines elided ...]
*ERROR*: The recv state of tqpair=0x13c6b00 is same with the state(6) to be set 00:35:01.675 [2024-10-01 22:33:56.752066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6b00 is same with the state(6) to be set 00:35:01.675 [2024-10-01 22:33:56.752072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6b00 is same with the state(6) to be set 00:35:01.675 [2024-10-01 22:33:56.752077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6b00 is same with the state(6) to be set 00:35:01.675 [2024-10-01 22:33:56.752082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6b00 is same with the state(6) to be set 00:35:01.675 [2024-10-01 22:33:56.752086] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6b00 is same with the state(6) to be set 00:35:01.675 [2024-10-01 22:33:56.752092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6b00 is same with the state(6) to be set 00:35:01.675 [2024-10-01 22:33:56.752097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6b00 is same with the state(6) to be set 00:35:01.675 [2024-10-01 22:33:56.752102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6b00 is same with the state(6) to be set 00:35:01.675 [2024-10-01 22:33:56.752107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6b00 is same with the state(6) to be set 00:35:01.675 [2024-10-01 22:33:56.752112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6b00 is same with the state(6) to be set 00:35:01.675 [2024-10-01 22:33:56.752116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6b00 is same with the state(6) to be set 00:35:01.675 [2024-10-01 22:33:56.752121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6b00 is same with the state(6) to be set 00:35:01.675 [2024-10-01 22:33:56.753558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.675 [2024-10-01 22:33:56.753581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.675 [2024-10-01 22:33:56.753587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.675 [2024-10-01 22:33:56.753593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.675 [2024-10-01 22:33:56.753598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.675 [2024-10-01 22:33:56.753603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.675 [2024-10-01 22:33:56.753608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.675 [2024-10-01 22:33:56.753613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.675 [2024-10-01 22:33:56.753621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.675 [2024-10-01 
22:33:56.753631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.675 [2024-10-01 22:33:56.753636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.675 [2024-10-01 22:33:56.753641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.675 [2024-10-01 22:33:56.753646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.675 [2024-10-01 22:33:56.753651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.675 [2024-10-01 22:33:56.753656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.675 [2024-10-01 22:33:56.753661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.675 [2024-10-01 22:33:56.753666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.675 [2024-10-01 22:33:56.753671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.675 [2024-10-01 22:33:56.753676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.675 [2024-10-01 22:33:56.753680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.675 [2024-10-01 22:33:56.753685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.675 [2024-10-01 22:33:56.753690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.675 [2024-10-01 22:33:56.753695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.675 [2024-10-01 22:33:56.753700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.675 [2024-10-01 22:33:56.753704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.675 [2024-10-01 22:33:56.753709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.675 [2024-10-01 22:33:56.753713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.675 [2024-10-01 22:33:56.753718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.675 [2024-10-01 22:33:56.753723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.675 [2024-10-01 22:33:56.753727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.675 [2024-10-01 22:33:56.753732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same 
with the state(6) to be set 00:35:01.675 [2024-10-01 22:33:56.753736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.753741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.753746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.753751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.753758] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.753763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.753768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.753773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.753778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.753782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.753787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.753792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.753797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.753802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.753807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.753812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.753817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.753822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.753826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.753831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.753836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.753840] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.753845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.753850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.753855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.753860] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.753865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.753869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.753874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.753879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.753883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.753889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7990 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.754329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.754344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.754349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.754354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.754360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.754364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.754369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.754375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.754380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.754386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.754391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the 
state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.754396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.754401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.754406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.754411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.754416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.754420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.754425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.754430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.754434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.754439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.754445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.754449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.754454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.754459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.754464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.754472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.754476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.754481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.754485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.754490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.754495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.754500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.754505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.754510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.754514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.754519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.754523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.754528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.676 [2024-10-01 22:33:56.754534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.677 [2024-10-01 22:33:56.754539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.677 [2024-10-01 22:33:56.754544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.677 [2024-10-01 22:33:56.754549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.677 [2024-10-01 22:33:56.754554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.677 [2024-10-01 22:33:56.754558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.677 [2024-10-01 22:33:56.754563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.677 [2024-10-01 22:33:56.754568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.677 [2024-10-01 22:33:56.754573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.677 [2024-10-01 22:33:56.754577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.677 [2024-10-01 22:33:56.754582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.677 [2024-10-01 22:33:56.754587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.677 [2024-10-01 22:33:56.754591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.677 [2024-10-01 22:33:56.754596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.677 [2024-10-01 22:33:56.754602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.677 [2024-10-01 
22:33:56.754608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.677 [2024-10-01 22:33:56.754612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.677 [2024-10-01 22:33:56.755324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.677 [2024-10-01 22:33:56.755362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.677 [2024-10-01 22:33:56.755380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.677 [2024-10-01 22:33:56.755389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.677 [2024-10-01 22:33:56.755399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.677 [2024-10-01 22:33:56.755407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.677 [2024-10-01 22:33:56.755417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.677 [2024-10-01 22:33:56.755425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.677 [2024-10-01 22:33:56.755435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.677 [2024-10-01 22:33:56.755442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.677 [2024-10-01 22:33:56.755451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.677 [2024-10-01 22:33:56.755460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.677 [2024-10-01 22:33:56.755471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.677 [2024-10-01 22:33:56.755479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.677 [2024-10-01 22:33:56.755489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.677 [2024-10-01 22:33:56.755497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.677 [2024-10-01 22:33:56.755507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.677 [2024-10-01 22:33:56.755515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.677 [2024-10-01 22:33:56.755525] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.677 [2024-10-01 22:33:56.755532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.677 [2024-10-01 22:33:56.755542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.677 [2024-10-01 22:33:56.755551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.677 [2024-10-01 22:33:56.755561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.677 [2024-10-01 22:33:56.755574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.677 [2024-10-01 22:33:56.755583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.677 [2024-10-01 22:33:56.755591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.677 [2024-10-01 22:33:56.755602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.677 [2024-10-01 22:33:56.755610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.677 [2024-10-01 22:33:56.755620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.677 [2024-10-01 22:33:56.755637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.677 [2024-10-01 22:33:56.755647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.677 [2024-10-01 22:33:56.755654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.677 [2024-10-01 22:33:56.755664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.677 [2024-10-01 22:33:56.755672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.677 [2024-10-01 22:33:56.755681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.677 [2024-10-01 22:33:56.755689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.677 [2024-10-01 22:33:56.755698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.677 [2024-10-01 22:33:56.755706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.677 [2024-10-01 22:33:56.755715] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.677 [2024-10-01 22:33:56.755724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.677 [2024-10-01 22:33:56.755733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.677 [2024-10-01 22:33:56.755741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.677 [2024-10-01 22:33:56.755750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.677 [2024-10-01 22:33:56.755757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.677 [2024-10-01 22:33:56.755767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.677 [2024-10-01 22:33:56.755774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.677 [2024-10-01 22:33:56.755784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.677 [2024-10-01 22:33:56.755791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.677 [2024-10-01 22:33:56.755803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.677 [2024-10-01 22:33:56.755810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.677 [2024-10-01 22:33:56.755819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.677 [2024-10-01 22:33:56.755827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.677 [2024-10-01 22:33:56.755837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.677 [2024-10-01 22:33:56.755844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.677 [2024-10-01 22:33:56.755853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.677 [2024-10-01 22:33:56.755860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.677 [2024-10-01 22:33:56.755871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.677 [2024-10-01 22:33:56.755879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.677 [2024-10-01 22:33:56.755888] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.677 [2024-10-01 22:33:56.755896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.677 [2024-10-01 22:33:56.755905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.677 [2024-10-01 22:33:56.755913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.677 [2024-10-01 22:33:56.755922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.677 [2024-10-01 22:33:56.755930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.677 [2024-10-01 22:33:56.755940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.678 [2024-10-01 22:33:56.755947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.678 [2024-10-01 22:33:56.755957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.678 [2024-10-01 22:33:56.755964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.678 [2024-10-01 22:33:56.755978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.678 [2024-10-01 22:33:56.755986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.678 [2024-10-01 22:33:56.755995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.678 [2024-10-01 22:33:56.756002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.678 [2024-10-01 22:33:56.756012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.678 [2024-10-01 22:33:56.756022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.678 [2024-10-01 22:33:56.756031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.678 [2024-10-01 22:33:56.756038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.678 [2024-10-01 22:33:56.756048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.678 [2024-10-01 22:33:56.756055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.678 [2024-10-01 22:33:56.756065] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.678 [2024-10-01 22:33:56.756073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.678 [2024-10-01 22:33:56.756082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.678 [2024-10-01 22:33:56.756089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.678 [2024-10-01 22:33:56.756099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.678 [2024-10-01 22:33:56.756106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.678 [2024-10-01 22:33:56.756116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.678 [2024-10-01 22:33:56.756124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.678 [2024-10-01 22:33:56.756134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.678 [2024-10-01 22:33:56.756141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.678 [2024-10-01 22:33:56.756150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.678 [2024-10-01 22:33:56.756157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.678 [2024-10-01 22:33:56.756168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.678 [2024-10-01 22:33:56.756175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.678 [2024-10-01 22:33:56.756184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.678 [2024-10-01 22:33:56.756192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.678 [2024-10-01 22:33:56.756201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.678 [2024-10-01 22:33:56.756208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.678 [2024-10-01 22:33:56.756220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.678 [2024-10-01 22:33:56.756227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.678 [2024-10-01 22:33:56.756238] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.678 [2024-10-01 22:33:56.756246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.678 [2024-10-01 22:33:56.756255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.678 [2024-10-01 22:33:56.756262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.678 [2024-10-01 22:33:56.756272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.678 [2024-10-01 22:33:56.756280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.678 [2024-10-01 22:33:56.756290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.678 [2024-10-01 22:33:56.756297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.678 [2024-10-01 22:33:56.756306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.678 [2024-10-01 22:33:56.756313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.678 [2024-10-01 22:33:56.756322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.678 [2024-10-01 22:33:56.756330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.678 [2024-10-01 22:33:56.756340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.678 [2024-10-01 22:33:56.756347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.678 [2024-10-01 22:33:56.756357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.678 [2024-10-01 22:33:56.756364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.678 [2024-10-01 22:33:56.756373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.678 [2024-10-01 22:33:56.756381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.678 [2024-10-01 22:33:56.756390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.678 [2024-10-01 22:33:56.756398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.678 [2024-10-01 22:33:56.756407] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.678 [2024-10-01 22:33:56.756414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.678 [2024-10-01 22:33:56.756424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.678 [2024-10-01 22:33:56.756432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.678 [2024-10-01 22:33:56.756442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.678 [2024-10-01 22:33:56.756450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.678 [2024-10-01 22:33:56.756460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.678 [2024-10-01 22:33:56.756467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.678 [2024-10-01 22:33:56.756476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.678 [2024-10-01 22:33:56.756484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.678 [2024-10-01 22:33:56.756512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:01.678 [2024-10-01 22:33:56.756554] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1be1ff0 was disconnected and freed. reset controller. 
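The repeated tcp.c:1773 error above is produced by a state-transition guard: while the target drains a dying connection, its poller keeps asking to set the qpair's receive state to the value it already holds, and every redundant request is logged. A minimal C sketch of that guard pattern, with illustrative names only (these are not SPDK's actual identifiers, and state(6) is assumed here to be the quiescing/error value):

    #include <stdio.h>

    /* Hypothetical stand-ins for the transport's recv-state machinery. */
    enum recv_state { RS_AWAIT_PDU_READY = 0, RS_QUIESCING = 6 };

    struct tqpair { enum recv_state recv_state; };

    static void set_recv_state(struct tqpair *q, enum recv_state s)
    {
        if (q->recv_state == s) {
            /* No-op transition: log and return without changing state.
             * A poller retrying this on every pass yields long runs of
             * identical lines like the ones collapsed above. */
            fprintf(stderr,
                    "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                    (void *)q, (int)s);
            return;
        }
        q->recv_state = s;
    }

Because each no-op call logs one line, a single stuck qpair can account for hundreds of consecutive identical records in this transcript.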
00:35:01.678 [2024-10-01 22:33:56.756800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 [same record repeated for cid:0-3 on each of nine admin qpairs, each followed by "ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0", until 22:33:56.757584]
00:35:01.678 [2024-10-01 22:33:56.756875] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576950 is same with the state(6) to be set
00:35:01.679 [2024-10-01 22:33:56.756962] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a10200 is same with the state(6) to be set
00:35:01.679 [2024-10-01 22:33:56.757066] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a0dc10 is same with the state(6) to be set
00:35:01.679 [2024-10-01 22:33:56.757154] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198fa30 is same with the state(6) to be set
00:35:01.679 [2024-10-01 22:33:56.757241] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ede10 is same with the state(6) to be set
00:35:01.679 [2024-10-01 22:33:56.757325] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1491610 is same with the state(6) to be set
00:35:01.679 [2024-10-01 22:33:56.757415] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1995200 is same with the state(6) to be set
00:35:01.679 [2024-10-01 22:33:56.757505] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156fb00 is same with the state(6) to be set
00:35:01.679 [2024-10-01 22:33:56.757591] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1578f40 is same with the state(6) to be set
00:35:01.679 [2024-10-01 22:33:56.758030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.679 [2024-10-01 22:33:56.758051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.679 [2024-10-01 22:33:56.758065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.679 [2024-10-01 22:33:56.758073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.679 [2024-10-01 22:33:56.758083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.679 [2024-10-01 22:33:56.758090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.679 [2024-10-01 22:33:56.758100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.679 [2024-10-01 22:33:56.758108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.679 [2024-10-01 22:33:56.758119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.679 [2024-10-01 22:33:56.758126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.679 [2024-10-01 22:33:56.758136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.679 [2024-10-01 22:33:56.758144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.679 [2024-10-01 22:33:56.758154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.680 [2024-10-01 22:33:56.758162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.680 [2024-10-01 22:33:56.758175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.680 [2024-10-01 22:33:56.758184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.680 [2024-10-01 22:33:56.758194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.680 [2024-10-01 22:33:56.758202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.680 [2024-10-01 22:33:56.758211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.680 [2024-10-01 22:33:56.758219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.680 [2024-10-01 22:33:56.758229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.680 
[2024-10-01 22:33:56.758236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.680 [2024-10-01 22:33:56.758245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.680 [2024-10-01 22:33:56.758253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.680 [2024-10-01 22:33:56.758263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.680 [2024-10-01 22:33:56.758270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.680 [2024-10-01 22:33:56.758279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.680 [2024-10-01 22:33:56.758287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.680 [2024-10-01 22:33:56.758297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.680 [2024-10-01 22:33:56.758304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.680 [2024-10-01 22:33:56.758314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.680 [2024-10-01 22:33:56.758323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.680 [2024-10-01 22:33:56.758333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.680 [2024-10-01 22:33:56.758340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.680 [2024-10-01 22:33:56.758349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.680 [2024-10-01 22:33:56.758357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.680 [2024-10-01 22:33:56.758367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.680 [2024-10-01 22:33:56.758374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.680 [2024-10-01 22:33:56.758384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.680 [2024-10-01 22:33:56.758393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.680 [2024-10-01 22:33:56.758402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.680 [2024-10-01 
22:33:56.758409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.680 [2024-10-01 22:33:56.758419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.680 [2024-10-01 22:33:56.758427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.680 [2024-10-01 22:33:56.758436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.680 [2024-10-01 22:33:56.758445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.680 [2024-10-01 22:33:56.758454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.680 [2024-10-01 22:33:56.758462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.680 [2024-10-01 22:33:56.758472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.680 [2024-10-01 22:33:56.758479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.680 [2024-10-01 22:33:56.758488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.680 [2024-10-01 22:33:56.758495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.680 [2024-10-01 22:33:56.758505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.680 [2024-10-01 22:33:56.758513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.680 [2024-10-01 22:33:56.758522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.680 [2024-10-01 22:33:56.758530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.680 [2024-10-01 22:33:56.758539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.680 [2024-10-01 22:33:56.758546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.680 [2024-10-01 22:33:56.758556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.680 [2024-10-01 22:33:56.758564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.680 [2024-10-01 22:33:56.758574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.680 [2024-10-01 
22:33:56.758581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.680 [2024-10-01 22:33:56.758590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.680 [2024-10-01 22:33:56.758598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.680 [2024-10-01 22:33:56.758610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.680 [2024-10-01 22:33:56.758618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.680 [2024-10-01 22:33:56.758636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.680 [2024-10-01 22:33:56.758644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.680 [2024-10-01 22:33:56.765020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.680 [2024-10-01 22:33:56.765041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.680 [2024-10-01 22:33:56.765048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.680 [2024-10-01 22:33:56.765054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.680 [2024-10-01 22:33:56.765060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.680 [2024-10-01 22:33:56.765065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.680 [2024-10-01 22:33:56.765070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7e60 is same with the state(6) to be set 00:35:01.680 [2024-10-01 22:33:56.774666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.680 [2024-10-01 22:33:56.774701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.680 [2024-10-01 22:33:56.774714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.680 [2024-10-01 22:33:56.774725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.680 [2024-10-01 22:33:56.774737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.680 [2024-10-01 22:33:56.774746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.680 [2024-10-01 22:33:56.774758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.680 [2024-10-01 22:33:56.774766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.680 [2024-10-01 22:33:56.774775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-10-01 22:33:56.774784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.681 [2024-10-01 22:33:56.774794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-10-01 22:33:56.774802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.681 [2024-10-01 22:33:56.774812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-10-01 22:33:56.774819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.681 [2024-10-01 22:33:56.774835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-10-01 22:33:56.774843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.681 [2024-10-01 22:33:56.774852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-10-01 22:33:56.774861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.681 [2024-10-01 22:33:56.774871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-10-01 22:33:56.774878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.681 [2024-10-01 22:33:56.774888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-10-01 22:33:56.774896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.681 [2024-10-01 22:33:56.774906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-10-01 22:33:56.774914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.681 [2024-10-01 22:33:56.774923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-10-01 22:33:56.774931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.681 [2024-10-01 22:33:56.774941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:01.681 [2024-10-01 22:33:56.774948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.681 [2024-10-01 22:33:56.774958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-10-01 22:33:56.774967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.681 [2024-10-01 22:33:56.774977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-10-01 22:33:56.774985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.681 [2024-10-01 22:33:56.774995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-10-01 22:33:56.775003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.681 [2024-10-01 22:33:56.775012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-10-01 22:33:56.775020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.681 [2024-10-01 22:33:56.775030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-10-01 22:33:56.775038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.681 [2024-10-01 22:33:56.775048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-10-01 22:33:56.775057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.681 [2024-10-01 22:33:56.775067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-10-01 22:33:56.775077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.681 [2024-10-01 22:33:56.775087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-10-01 22:33:56.775094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.681 [2024-10-01 22:33:56.775104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-10-01 22:33:56.775112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.681 [2024-10-01 22:33:56.775122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:01.681 [2024-10-01 22:33:56.775131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.681 [2024-10-01 22:33:56.775141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-10-01 22:33:56.775149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.681 [2024-10-01 22:33:56.775159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-10-01 22:33:56.775167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.681 [2024-10-01 22:33:56.775177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-10-01 22:33:56.775185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.681 [2024-10-01 22:33:56.775196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-10-01 22:33:56.775204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.681 [2024-10-01 22:33:56.775214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-10-01 22:33:56.775222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.681 [2024-10-01 22:33:56.775232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-10-01 22:33:56.775239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.681 [2024-10-01 22:33:56.775288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:01.681 [2024-10-01 22:33:56.775336] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1be1c70 was disconnected and freed. reset controller. 
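(The block above shows the expected teardown pattern for this test: every in-flight admin ASYNC EVENT REQUEST and IO WRITE/READ command completes with "ABORTED - SQ DELETION (00/08)" once the TCP qpair drops, after which bdev_nvme reports the qpair disconnected and freed and resets the controller. As a minimal sketch only — not part of the SPDK test suite — the snippet below tallies those abort completions and disconnect notices from a saved copy of this console output. The regexes assume only the record shapes visible above, and the "console.log" path is hypothetical.)

```python
# Sketch: summarize "ABORTED - SQ DELETION" completions and qpair
# disconnects from a captured autotest console log. Assumes the log
# record formats shown above; the input file name is hypothetical.
import re
from collections import Counter

ABORT_RE = re.compile(r"ABORTED - SQ DELETION \(00/08\) qid:(\d+)")
DISC_RE = re.compile(r"qpair (0x[0-9a-f]+) was disconnected and freed")

def summarize(path: str) -> None:
    aborts = Counter()   # aborted completions per queue id (0 = admin)
    disconnects = []     # qpair pointers reported freed
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            for qid in ABORT_RE.findall(line):
                aborts[int(qid)] += 1
            disconnects.extend(DISC_RE.findall(line))
    for qid, count in sorted(aborts.items()):
        print(f"qid {qid}: {count} commands aborted by SQ deletion")
    print(f"{len(disconnects)} qpair(s) disconnected: {', '.join(disconnects)}")

if __name__ == "__main__":
    summarize("console.log")  # hypothetical path to the saved log
```

(Counting per qid separates admin-queue aborts, qid:0, from IO-queue aborts, qid:1, which makes it easy to confirm that a reset aborted the whole submission queue rather than individual commands.)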
00:35:01.681 [2024-10-01 22:33:56.775443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-10-01 22:33:56.775456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.681 [2024-10-01 22:33:56.775475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-10-01 22:33:56.775483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.681 [2024-10-01 22:33:56.775493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-10-01 22:33:56.775501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.681 [2024-10-01 22:33:56.775512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-10-01 22:33:56.775520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.681 [2024-10-01 22:33:56.775530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-10-01 22:33:56.775538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.681 [2024-10-01 22:33:56.775548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-10-01 22:33:56.775556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.681 [2024-10-01 22:33:56.775566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-10-01 22:33:56.775575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.681 [2024-10-01 22:33:56.775585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-10-01 22:33:56.775594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.681 [2024-10-01 22:33:56.775604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-10-01 22:33:56.775612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.681 [2024-10-01 22:33:56.775622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-10-01 22:33:56.775641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.681 [2024-10-01 
22:33:56.775652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-10-01 22:33:56.775660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.681 [2024-10-01 22:33:56.775669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-10-01 22:33:56.775677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.681 [2024-10-01 22:33:56.775687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.681 [2024-10-01 22:33:56.775695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.681 [2024-10-01 22:33:56.775706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-10-01 22:33:56.775715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.682 [2024-10-01 22:33:56.775725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-10-01 22:33:56.775733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.682 [2024-10-01 22:33:56.775744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-10-01 22:33:56.775752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.682 [2024-10-01 22:33:56.775762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-10-01 22:33:56.775770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.682 [2024-10-01 22:33:56.775781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-10-01 22:33:56.775788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.682 [2024-10-01 22:33:56.775799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-10-01 22:33:56.775806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.682 [2024-10-01 22:33:56.775817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-10-01 22:33:56.775825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.682 [2024-10-01 
22:33:56.775835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-10-01 22:33:56.775844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.682 [2024-10-01 22:33:56.775854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-10-01 22:33:56.775863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.682 [2024-10-01 22:33:56.775873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-10-01 22:33:56.775881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.682 [2024-10-01 22:33:56.775892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-10-01 22:33:56.775900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.682 [2024-10-01 22:33:56.775912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-10-01 22:33:56.775920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.682 [2024-10-01 22:33:56.775930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-10-01 22:33:56.775939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.682 [2024-10-01 22:33:56.775950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-10-01 22:33:56.775959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.682 [2024-10-01 22:33:56.775969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-10-01 22:33:56.775977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.682 [2024-10-01 22:33:56.775987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-10-01 22:33:56.775995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.682 [2024-10-01 22:33:56.776006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-10-01 22:33:56.776014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.682 [2024-10-01 
22:33:56.776024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-10-01 22:33:56.776033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.682 [2024-10-01 22:33:56.776043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-10-01 22:33:56.776051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.682 [2024-10-01 22:33:56.776061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-10-01 22:33:56.776069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.682 [2024-10-01 22:33:56.776080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-10-01 22:33:56.776088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.682 [2024-10-01 22:33:56.776098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-10-01 22:33:56.776106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.682 [2024-10-01 22:33:56.776116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-10-01 22:33:56.776124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.682 [2024-10-01 22:33:56.776134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-10-01 22:33:56.776143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.682 [2024-10-01 22:33:56.776153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-10-01 22:33:56.776161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.682 [2024-10-01 22:33:56.776171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-10-01 22:33:56.776181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.682 [2024-10-01 22:33:56.776191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-10-01 22:33:56.776199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.682 [2024-10-01 
22:33:56.776209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-10-01 22:33:56.776218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.682 [2024-10-01 22:33:56.776227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-10-01 22:33:56.776235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.682 [2024-10-01 22:33:56.776245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-10-01 22:33:56.776253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.682 [2024-10-01 22:33:56.776263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-10-01 22:33:56.776271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.682 [2024-10-01 22:33:56.776281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-10-01 22:33:56.776290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.682 [2024-10-01 22:33:56.776300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-10-01 22:33:56.776308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.682 [2024-10-01 22:33:56.776318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-10-01 22:33:56.776325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.682 [2024-10-01 22:33:56.776336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-10-01 22:33:56.776343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.682 [2024-10-01 22:33:56.776353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-10-01 22:33:56.776361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.682 [2024-10-01 22:33:56.776371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-10-01 22:33:56.776380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.682 [2024-10-01 
22:33:56.776390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-10-01 22:33:56.776398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.682 [2024-10-01 22:33:56.776410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-10-01 22:33:56.776418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.682 [2024-10-01 22:33:56.776429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.682 [2024-10-01 22:33:56.776437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.682 [2024-10-01 22:33:56.776448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.683 [2024-10-01 22:33:56.776456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.683 [2024-10-01 22:33:56.776465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.683 [2024-10-01 22:33:56.776474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.683 [2024-10-01 22:33:56.776484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.683 [2024-10-01 22:33:56.776492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.683 [2024-10-01 22:33:56.776501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.683 [2024-10-01 22:33:56.776509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.683 [2024-10-01 22:33:56.776519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.683 [2024-10-01 22:33:56.776527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.683 [2024-10-01 22:33:56.776538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.683 [2024-10-01 22:33:56.776546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.683 [2024-10-01 22:33:56.776556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.683 [2024-10-01 22:33:56.776564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.683 [2024-10-01 
00:35:01.683 [2024-10-01 22:33:56.776574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.683 [2024-10-01 22:33:56.776583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 22:33:56.776592-776643: the same WRITE/ABORTED pair repeats for cid:61-63 (lba:32384-32640, len:128 each) ...]
00:35:01.683 [2024-10-01 22:33:56.776652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197f8d0 is same with the state(6) to be set
00:35:01.683 [2024-10-01 22:33:56.776697] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x197f8d0 was disconnected and freed. reset controller.
00:35:01.683 [2024-10-01 22:33:56.778012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.683 [2024-10-01 22:33:56.778030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 22:33:56.778046-779224: the same READ/ABORTED pair repeats for cid:1-63, lba stepping by 128 from 16512 to 24448 (len:128 each) ...]
00:35:01.684 [2024-10-01 22:33:56.779289] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1be5d70 was disconnected and freed. reset controller.
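For readers skimming the log: the "(00/08)" printed by spdk_nvme_print_completion() is the NVMe status code type / status code pair. SCT 0x0 is the generic status type and, within it, SC 0x08 is "Command Aborted due to SQ Deletion", so every command still queued on a qpair being torn down completes this way before the driver attempts the reset. A minimal standalone sketch of the decode (plain C, not SPDK code; sct_name() is an illustrative helper):

#include <stdio.h>

/* Illustrative helper, not an SPDK symbol: name the NVMe status code type. */
static const char *sct_name(unsigned sct)
{
    switch (sct) {
    case 0x0: return "GENERIC";
    case 0x1: return "COMMAND SPECIFIC";
    case 0x2: return "MEDIA AND DATA INTEGRITY";
    case 0x3: return "PATH RELATED";
    default:  return "VENDOR SPECIFIC/RESERVED";
    }
}

int main(void)
{
    unsigned sct = 0x00, sc = 0x08;  /* the "(00/08)" pair from the log */

    /* In the generic status type, SC 0x08 is "Command Aborted due to
     * SQ Deletion": the submission queue is being deleted, so every
     * command still outstanding on it is completed with this status. */
    printf("sct=%02x (%s), sc=%02x%s\n", sct, sct_name(sct), sc,
           (sct == 0x0 && sc == 0x08) ? " -> ABORTED - SQ DELETION" : "");
    return 0;
}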
00:35:01.684 [2024-10-01 22:33:56.779345] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1576950 (9): Bad file descriptor
00:35:01.684 [2024-10-01 22:33:56.779368] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a10200 (9): Bad file descriptor
00:35:01.684 [2024-10-01 22:33:56.779398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:35:01.684 [2024-10-01 22:33:56.779408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 22:33:56.779416-779459: the same ASYNC EVENT REQUEST/ABORTED pair repeats for cid:1-3 ...]
00:35:01.684 [2024-10-01 22:33:56.779467] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a0f650 is same with the state(6) to be set
00:35:01.685 [2024-10-01 22:33:56.779486] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a0dc10 (9): Bad file descriptor
00:35:01.685 [2024-10-01 22:33:56.779501] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x198fa30 (9): Bad file descriptor
00:35:01.685 [2024-10-01 22:33:56.779514] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ede10 (9): Bad file descriptor
00:35:01.685 [2024-10-01 22:33:56.779528] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1491610 (9): Bad file descriptor
00:35:01.685 [2024-10-01 22:33:56.779543] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1995200 (9): Bad file descriptor
00:35:01.685 [2024-10-01 22:33:56.779559] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x156fb00 (9): Bad file descriptor
00:35:01.685 [2024-10-01 22:33:56.779576] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1578f40 (9): Bad file descriptor
00:35:01.685 [2024-10-01 22:33:56.783479] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:35:01.685 [2024-10-01 22:33:56.784093] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:35:01.685 [2024-10-01 22:33:56.784122] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:35:01.685 [2024-10-01 22:33:56.784135] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:35:01.685 [2024-10-01 22:33:56.784296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:01.685 [2024-10-01 22:33:56.784313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19ede10 with addr=10.0.0.2, port=4420
00:35:01.685 [2024-10-01 22:33:56.784323] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ede10 is same with the state(6) to be set
00:35:01.685 [2024-10-01 22:33:56.785061] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
[... 22:33:56.785620-786494: the same connect() failed (errno = 111) / sock connection error / recv state triple repeats for tqpair=0x198fa30, 0x1491610 and 0x1a0dc10, all with addr=10.0.0.2, port=4420 ...]
00:35:01.685 [2024-10-01 22:33:56.786506] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ede10 (9): Bad file descriptor
[... 22:33:56.786560-786779: nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00, repeated 5 times ...]
00:35:01.685 [2024-10-01 22:33:56.786805] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x198fa30 (9): Bad file descriptor
00:35:01.685 [2024-10-01 22:33:56.786817] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1491610 (9): Bad file descriptor
00:35:01.685 [2024-10-01 22:33:56.786827] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a0dc10 (9): Bad file descriptor
00:35:01.685 [2024-10-01 22:33:56.786838] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:35:01.685 [2024-10-01 22:33:56.786847] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:35:01.685 [2024-10-01 22:33:56.786857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:35:01.685 [2024-10-01 22:33:56.786936] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[... 22:33:56.786947-787024: the same Ctrlr is in error state / controller reinitialization failed / in failed state triple repeats for cnode4, cnode6 and cnode10 ...]
00:35:01.685 [2024-10-01 22:33:56.787065] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:01.685 [2024-10-01 22:33:56.787073] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:01.685 [2024-10-01 22:33:56.787081] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
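Two error codes in this stretch are worth decoding. "(9): Bad file descriptor" is errno 9 (EBADF): the driver tries to flush a qpair whose socket has already been closed. "errno = 111" is ECONNREFUSED on Linux, returned by connect() while the target listener on 10.0.0.2:4420 is down mid-shutdown; each refused reconnect is what drives the controllers above into the failed state. A minimal sketch of the connect() side (plain C, not SPDK code; on a host where 10.0.0.2 is unreachable rather than refusing, the errno would differ, e.g. EHOSTUNREACH or ETIMEDOUT):

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Address and port taken from the log; 4420 is the IANA-assigned
     * NVMe/TCP port the target was listening on before shutdown. */
    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return 1;
    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
        /* With the listener gone, this prints errno 111 (ECONNREFUSED),
         * matching the posix_sock_create() errors in the log. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    close(fd);
    return 0;
}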
00:35:01.685 [2024-10-01 22:33:56.789362] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a0f650 (9): Bad file descriptor
00:35:01.685 [2024-10-01 22:33:56.789483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.685 [2024-10-01 22:33:56.789496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 22:33:56.789513-790708: the same READ/ABORTED pair repeats for cid:1-63, lba stepping by 128 from 16512 to 24448 (len:128 each) ...]
00:35:01.687 [2024-10-01 22:33:56.790718] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cba40 is same with the state(6) to be set
00:35:01.687 [2024-10-01 22:33:56.792026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.687 [2024-10-01 22:33:56.792042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 22:33:56.792056 onward: the same READ/ABORTED pair repeats from cid:1 (lba:16512), lba stepping by 128; the excerpt breaks off at the cid:45 (lba:22144) command ...]
00:35:01.688 [2024-10-01 22:33:56.792908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.688 [2024-10-01 22:33:56.792920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.688 [2024-10-01 22:33:56.792928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.688 [2024-10-01 22:33:56.792938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.688 [2024-10-01 22:33:56.792947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.688 [2024-10-01 22:33:56.792958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.688 [2024-10-01 22:33:56.792966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.688 [2024-10-01 22:33:56.792977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.688 [2024-10-01 22:33:56.792985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.688 [2024-10-01 22:33:56.792995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.688 [2024-10-01 22:33:56.793003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.688 [2024-10-01 22:33:56.793014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.688 [2024-10-01 22:33:56.793021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.688 [2024-10-01 22:33:56.793031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.688 [2024-10-01 22:33:56.793040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.688 [2024-10-01 22:33:56.793050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.688 [2024-10-01 22:33:56.793058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.688 [2024-10-01 22:33:56.793068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.688 [2024-10-01 22:33:56.793076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.688 [2024-10-01 22:33:56.793087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.688 [2024-10-01 
22:33:56.793095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.688 [2024-10-01 22:33:56.793106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.688 [2024-10-01 22:33:56.793114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.688 [2024-10-01 22:33:56.793135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.688 [2024-10-01 22:33:56.793144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.688 [2024-10-01 22:33:56.793154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.688 [2024-10-01 22:33:56.793163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.688 [2024-10-01 22:33:56.793173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.688 [2024-10-01 22:33:56.793182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.688 [2024-10-01 22:33:56.793193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.688 [2024-10-01 22:33:56.793201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.688 [2024-10-01 22:33:56.793212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.688 [2024-10-01 22:33:56.793221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.688 [2024-10-01 22:33:56.793232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.688 [2024-10-01 22:33:56.793241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.688 [2024-10-01 22:33:56.793252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.688 [2024-10-01 22:33:56.793260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.688 [2024-10-01 22:33:56.793269] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cccb0 is same with the state(6) to be set 00:35:01.688 [2024-10-01 22:33:56.794560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.688 [2024-10-01 22:33:56.794575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.688 [2024-10-01 22:33:56.794588] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.688 [2024-10-01 22:33:56.794597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.688 [2024-10-01 22:33:56.794609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.688 [2024-10-01 22:33:56.794618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.688 [2024-10-01 22:33:56.794634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.689 [2024-10-01 22:33:56.794644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.689 [2024-10-01 22:33:56.794656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.689 [2024-10-01 22:33:56.794663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.689 [2024-10-01 22:33:56.794676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.689 [2024-10-01 22:33:56.794684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.689 [2024-10-01 22:33:56.794694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.689 [2024-10-01 22:33:56.794702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.689 [2024-10-01 22:33:56.794713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.689 [2024-10-01 22:33:56.794720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.689 [2024-10-01 22:33:56.794730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.689 [2024-10-01 22:33:56.794738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.689 [2024-10-01 22:33:56.794748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.689 [2024-10-01 22:33:56.794756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.689 [2024-10-01 22:33:56.794767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.689 [2024-10-01 22:33:56.794774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.689 [2024-10-01 22:33:56.794785] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.689 [2024-10-01 22:33:56.794792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.689 [2024-10-01 22:33:56.794803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.689 [2024-10-01 22:33:56.794811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.689 [2024-10-01 22:33:56.794821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.689 [2024-10-01 22:33:56.794830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.689 [2024-10-01 22:33:56.794840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.689 [2024-10-01 22:33:56.794847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.689 [2024-10-01 22:33:56.794858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.689 [2024-10-01 22:33:56.794866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.689 [2024-10-01 22:33:56.794875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.689 [2024-10-01 22:33:56.794884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.689 [2024-10-01 22:33:56.794894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.689 [2024-10-01 22:33:56.794904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.689 [2024-10-01 22:33:56.794914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.689 [2024-10-01 22:33:56.794922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.689 [2024-10-01 22:33:56.794932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.689 [2024-10-01 22:33:56.794940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.689 [2024-10-01 22:33:56.794950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.689 [2024-10-01 22:33:56.794958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.689 [2024-10-01 22:33:56.794968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.689 [2024-10-01 22:33:56.794976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.689 [2024-10-01 22:33:56.794986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.689 [2024-10-01 22:33:56.794995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.689 [2024-10-01 22:33:56.795006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.689 [2024-10-01 22:33:56.795014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.689 [2024-10-01 22:33:56.795023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.689 [2024-10-01 22:33:56.795032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.689 [2024-10-01 22:33:56.795041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.689 [2024-10-01 22:33:56.795050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.689 [2024-10-01 22:33:56.795060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.689 [2024-10-01 22:33:56.795068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.689 [2024-10-01 22:33:56.795078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.689 [2024-10-01 22:33:56.795086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.689 [2024-10-01 22:33:56.795096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.689 [2024-10-01 22:33:56.795104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.689 [2024-10-01 22:33:56.795114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.689 [2024-10-01 22:33:56.795122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.689 [2024-10-01 22:33:56.795133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.689 [2024-10-01 22:33:56.795142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.689 [2024-10-01 22:33:56.795152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.689 [2024-10-01 22:33:56.795159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.689 [2024-10-01 22:33:56.795169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.689 [2024-10-01 22:33:56.795177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.689 [2024-10-01 22:33:56.795187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.689 [2024-10-01 22:33:56.795195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.689 [2024-10-01 22:33:56.795205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.689 [2024-10-01 22:33:56.795212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.689 [2024-10-01 22:33:56.795223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.689 [2024-10-01 22:33:56.795231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.689 [2024-10-01 22:33:56.795241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.689 [2024-10-01 22:33:56.795249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.690 [2024-10-01 22:33:56.795259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.690 [2024-10-01 22:33:56.795266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.690 [2024-10-01 22:33:56.795276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.690 [2024-10-01 22:33:56.795284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.690 [2024-10-01 22:33:56.795294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.690 [2024-10-01 22:33:56.795301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.690 [2024-10-01 22:33:56.795312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.690 [2024-10-01 22:33:56.795319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.690 [2024-10-01 22:33:56.795330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:35:01.690 [2024-10-01 22:33:56.795337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.690 [2024-10-01 22:33:56.795347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.690 [2024-10-01 22:33:56.795356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.690 [2024-10-01 22:33:56.795367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.690 [2024-10-01 22:33:56.795375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.690 [2024-10-01 22:33:56.795385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.690 [2024-10-01 22:33:56.795392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.690 [2024-10-01 22:33:56.795402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.690 [2024-10-01 22:33:56.795410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.690 [2024-10-01 22:33:56.795420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.690 [2024-10-01 22:33:56.795428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.690 [2024-10-01 22:33:56.795439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.690 [2024-10-01 22:33:56.795447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.690 [2024-10-01 22:33:56.795456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.690 [2024-10-01 22:33:56.795464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.690 [2024-10-01 22:33:56.795474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.690 [2024-10-01 22:33:56.795482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.690 [2024-10-01 22:33:56.795493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.690 [2024-10-01 22:33:56.795501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.690 [2024-10-01 22:33:56.795512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:01.690 [2024-10-01 22:33:56.795520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.690 [2024-10-01 22:33:56.795530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.690 [2024-10-01 22:33:56.795538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.690 [2024-10-01 22:33:56.795548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.690 [2024-10-01 22:33:56.795556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.690 [2024-10-01 22:33:56.795566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.690 [2024-10-01 22:33:56.795574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.690 [2024-10-01 22:33:56.795586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.690 [2024-10-01 22:33:56.795596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.690 [2024-10-01 22:33:56.795606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.690 [2024-10-01 22:33:56.795614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.690 [2024-10-01 22:33:56.795627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.690 [2024-10-01 22:33:56.795635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.690 [2024-10-01 22:33:56.795645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.690 [2024-10-01 22:33:56.795653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.690 [2024-10-01 22:33:56.795663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.690 [2024-10-01 22:33:56.795671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.690 [2024-10-01 22:33:56.795681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.690 [2024-10-01 22:33:56.795689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.690 [2024-10-01 22:33:56.795699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.690 [2024-10-01 
22:33:56.795706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.690 [2024-10-01 22:33:56.795717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.690 [2024-10-01 22:33:56.795724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.690 [2024-10-01 22:33:56.795734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.690 [2024-10-01 22:33:56.795742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.690 [2024-10-01 22:33:56.795751] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ce180 is same with the state(6) to be set 00:35:01.690 [2024-10-01 22:33:56.797052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.690 [2024-10-01 22:33:56.797068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.690 [2024-10-01 22:33:56.797082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.690 [2024-10-01 22:33:56.797091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.690 [2024-10-01 22:33:56.797103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.690 [2024-10-01 22:33:56.797113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.690 [2024-10-01 22:33:56.797124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.690 [2024-10-01 22:33:56.797135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.690 [2024-10-01 22:33:56.797146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.690 [2024-10-01 22:33:56.797154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.690 [2024-10-01 22:33:56.797164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.690 [2024-10-01 22:33:56.797172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.690 [2024-10-01 22:33:56.797182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.690 [2024-10-01 22:33:56.797190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.690 [2024-10-01 22:33:56.797200] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.690 [2024-10-01 22:33:56.797208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.690 [2024-10-01 22:33:56.797218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.690 [2024-10-01 22:33:56.797226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.690 [2024-10-01 22:33:56.797236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.690 [2024-10-01 22:33:56.797245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.690 [2024-10-01 22:33:56.797255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.690 [2024-10-01 22:33:56.797263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.690 [2024-10-01 22:33:56.797273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.690 [2024-10-01 22:33:56.797282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.690 [2024-10-01 22:33:56.797291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.690 [2024-10-01 22:33:56.797299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.691 [2024-10-01 22:33:56.797309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.691 [2024-10-01 22:33:56.797317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.691 [2024-10-01 22:33:56.797327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.691 [2024-10-01 22:33:56.797335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.691 [2024-10-01 22:33:56.797345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.691 [2024-10-01 22:33:56.797353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.691 [2024-10-01 22:33:56.797365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.691 [2024-10-01 22:33:56.797374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.691 [2024-10-01 22:33:56.797383] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.691 [2024-10-01 22:33:56.797392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.691 [2024-10-01 22:33:56.797401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.691 [2024-10-01 22:33:56.797409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.691 [2024-10-01 22:33:56.797419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.691 [2024-10-01 22:33:56.797427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.691 [2024-10-01 22:33:56.797437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.691 [2024-10-01 22:33:56.797445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.691 [2024-10-01 22:33:56.797455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.691 [2024-10-01 22:33:56.797463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.691 [2024-10-01 22:33:56.797474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.691 [2024-10-01 22:33:56.797482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.691 [2024-10-01 22:33:56.797492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.691 [2024-10-01 22:33:56.797501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.691 [2024-10-01 22:33:56.797510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.691 [2024-10-01 22:33:56.797519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.691 [2024-10-01 22:33:56.797528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.691 [2024-10-01 22:33:56.797537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.691 [2024-10-01 22:33:56.797547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.691 [2024-10-01 22:33:56.797555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.691 [2024-10-01 22:33:56.797565] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.691 [2024-10-01 22:33:56.797573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.691 [2024-10-01 22:33:56.797583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.691 [2024-10-01 22:33:56.797594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.691 [2024-10-01 22:33:56.797604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.691 [2024-10-01 22:33:56.797612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.691 [2024-10-01 22:33:56.797622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.691 [2024-10-01 22:33:56.797635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.691 [2024-10-01 22:33:56.797645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.691 [2024-10-01 22:33:56.797653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.691 [2024-10-01 22:33:56.797663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.691 [2024-10-01 22:33:56.797671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.691 [2024-10-01 22:33:56.797681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.691 [2024-10-01 22:33:56.797689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.691 [2024-10-01 22:33:56.797698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.691 [2024-10-01 22:33:56.797707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.691 [2024-10-01 22:33:56.797718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.691 [2024-10-01 22:33:56.797726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.691 [2024-10-01 22:33:56.797736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.691 [2024-10-01 22:33:56.797744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.691 [2024-10-01 22:33:56.797754] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.691 [2024-10-01 22:33:56.797762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.691 [2024-10-01 22:33:56.797772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.691 [2024-10-01 22:33:56.797780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.691 [2024-10-01 22:33:56.797790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.691 [2024-10-01 22:33:56.797798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.691 [2024-10-01 22:33:56.797809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.691 [2024-10-01 22:33:56.797817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.691 [2024-10-01 22:33:56.797829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.691 [2024-10-01 22:33:56.797837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.691 [2024-10-01 22:33:56.797847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.691 [2024-10-01 22:33:56.797856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.691 [2024-10-01 22:33:56.797866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.691 [2024-10-01 22:33:56.797874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.691 [2024-10-01 22:33:56.797884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.691 [2024-10-01 22:33:56.797892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.691 [2024-10-01 22:33:56.797903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.691 [2024-10-01 22:33:56.797912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.691 [2024-10-01 22:33:56.797922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.691 [2024-10-01 22:33:56.797930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.691 [2024-10-01 22:33:56.797940] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.691 [2024-10-01 22:33:56.797948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.691 [2024-10-01 22:33:56.797958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.691 [2024-10-01 22:33:56.797966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.691 [2024-10-01 22:33:56.797976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.691 [2024-10-01 22:33:56.797984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.691 [2024-10-01 22:33:56.797993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.691 [2024-10-01 22:33:56.798001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.691 [2024-10-01 22:33:56.798012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.691 [2024-10-01 22:33:56.798021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.691 [2024-10-01 22:33:56.798031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.691 [2024-10-01 22:33:56.798039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.692 [2024-10-01 22:33:56.798050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.692 [2024-10-01 22:33:56.798061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.692 [2024-10-01 22:33:56.798070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.692 [2024-10-01 22:33:56.798079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.692 [2024-10-01 22:33:56.798089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.692 [2024-10-01 22:33:56.798098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.692 [2024-10-01 22:33:56.798108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.692 [2024-10-01 22:33:56.798116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.692 [2024-10-01 22:33:56.798126] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.692 [2024-10-01 22:33:56.798134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.692 [2024-10-01 22:33:56.798144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.692 [2024-10-01 22:33:56.798152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.692 [2024-10-01 22:33:56.798162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.692 [2024-10-01 22:33:56.798170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.692 [2024-10-01 22:33:56.798180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.692 [2024-10-01 22:33:56.798188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.692 [2024-10-01 22:33:56.798199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.692 [2024-10-01 22:33:56.798207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.692 [2024-10-01 22:33:56.798217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.692 [2024-10-01 22:33:56.798225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.692 [2024-10-01 22:33:56.798235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.692 [2024-10-01 22:33:56.798244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.692 [2024-10-01 22:33:56.798253] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197e3a0 is same with the state(6) to be set 00:35:01.692 [2024-10-01 22:33:56.800313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.692 [2024-10-01 22:33:56.800349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.692 [2024-10-01 22:33:56.800366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.692 [2024-10-01 22:33:56.800380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.692 [2024-10-01 22:33:56.800390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.692 [2024-10-01 22:33:56.800398] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.692 [2024-10-01 22:33:56.800408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.692 [2024-10-01 22:33:56.800416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.692 [2024-10-01 22:33:56.800427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.692 [2024-10-01 22:33:56.800435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.692 [2024-10-01 22:33:56.800444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.692 [2024-10-01 22:33:56.800452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.692 [2024-10-01 22:33:56.800462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.692 [2024-10-01 22:33:56.800469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.692 [2024-10-01 22:33:56.800478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.692 [2024-10-01 22:33:56.800487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.692 [2024-10-01 22:33:56.800497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.692 [2024-10-01 22:33:56.800505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.692 [2024-10-01 22:33:56.800515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.692 [2024-10-01 22:33:56.800523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.692 [2024-10-01 22:33:56.800534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.692 [2024-10-01 22:33:56.800542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.692 [2024-10-01 22:33:56.800552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.692 [2024-10-01 22:33:56.800560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.692 [2024-10-01 22:33:56.800571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.692 [2024-10-01 22:33:56.800579] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.692 [2024-10-01 22:33:56.800589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.692 [2024-10-01 22:33:56.800598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.692 [2024-10-01 22:33:56.800611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.692 [2024-10-01 22:33:56.800619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.692 [2024-10-01 22:33:56.800637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.692 [2024-10-01 22:33:56.800646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.692 [2024-10-01 22:33:56.800657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.692 [2024-10-01 22:33:56.800665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.692 [2024-10-01 22:33:56.800676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.692 [2024-10-01 22:33:56.800684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.692 [2024-10-01 22:33:56.800694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.692 [2024-10-01 22:33:56.800703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.692 [2024-10-01 22:33:56.800713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.692 [2024-10-01 22:33:56.800721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.692 [2024-10-01 22:33:56.800730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.692 [2024-10-01 22:33:56.800738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.692 [2024-10-01 22:33:56.800749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.692 [2024-10-01 22:33:56.800757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.692 [2024-10-01 22:33:56.800766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.692 [2024-10-01 22:33:56.800775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.692 [2024-10-01 22:33:56.800785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.692 [2024-10-01 22:33:56.800793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.692 [2024-10-01 22:33:56.800803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.692 [2024-10-01 22:33:56.800811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.692 [2024-10-01 22:33:56.800821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.692 [2024-10-01 22:33:56.800829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.692 [2024-10-01 22:33:56.800839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.692 [2024-10-01 22:33:56.800848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.692 [2024-10-01 22:33:56.800858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.692 [2024-10-01 22:33:56.800866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.692 [2024-10-01 22:33:56.800876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.693 [2024-10-01 22:33:56.800884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.693 [2024-10-01 22:33:56.800895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.693 [2024-10-01 22:33:56.800903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.693 [2024-10-01 22:33:56.800913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.693 [2024-10-01 22:33:56.800921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.693 [2024-10-01 22:33:56.800931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.693 [2024-10-01 22:33:56.800939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.693 [2024-10-01 22:33:56.800949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.693 [2024-10-01 22:33:56.800957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.693 [2024-10-01 22:33:56.800967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.693 [2024-10-01 22:33:56.800974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.693 [2024-10-01 22:33:56.800985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.693 [2024-10-01 22:33:56.800993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.693 [2024-10-01 22:33:56.801003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.693 [2024-10-01 22:33:56.801011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.693 [2024-10-01 22:33:56.801021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.693 [2024-10-01 22:33:56.801028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.693 [2024-10-01 22:33:56.801039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.693 [2024-10-01 22:33:56.801046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.693 [2024-10-01 22:33:56.801057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.693 [2024-10-01 22:33:56.801064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.693 [2024-10-01 22:33:56.801076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.693 [2024-10-01 22:33:56.801084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.693 [2024-10-01 22:33:56.801094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.693 [2024-10-01 22:33:56.801102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.693 [2024-10-01 22:33:56.801113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.693 [2024-10-01 22:33:56.801121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.693 [2024-10-01 22:33:56.801131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.693 [2024-10-01 22:33:56.801139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:35:01.693 [2024-10-01 22:33:56.801149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.693 [2024-10-01 22:33:56.801157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.693 [2024-10-01 22:33:56.801168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.693 [2024-10-01 22:33:56.801175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.693 [2024-10-01 22:33:56.801185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.693 [2024-10-01 22:33:56.801193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.693 [2024-10-01 22:33:56.801203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.693 [2024-10-01 22:33:56.801210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.693 [2024-10-01 22:33:56.801221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.693 [2024-10-01 22:33:56.801228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.693 [2024-10-01 22:33:56.801241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.693 [2024-10-01 22:33:56.801248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.693 [2024-10-01 22:33:56.801259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.693 [2024-10-01 22:33:56.801267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.693 [2024-10-01 22:33:56.801277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.693 [2024-10-01 22:33:56.801284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.693 [2024-10-01 22:33:56.801294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.693 [2024-10-01 22:33:56.801304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.693 [2024-10-01 22:33:56.801314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.693 [2024-10-01 22:33:56.801322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:35:01.693 [2024-10-01 22:33:56.801333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.693 [2024-10-01 22:33:56.801341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.693 [2024-10-01 22:33:56.801351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.693 [2024-10-01 22:33:56.801359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.693 [2024-10-01 22:33:56.801369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.693 [2024-10-01 22:33:56.801377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.693 [2024-10-01 22:33:56.801389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.693 [2024-10-01 22:33:56.801397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.693 [2024-10-01 22:33:56.801406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.693 [2024-10-01 22:33:56.801415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.693 [2024-10-01 22:33:56.801425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.693 [2024-10-01 22:33:56.801433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.693 [2024-10-01 22:33:56.801443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.693 [2024-10-01 22:33:56.801451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.693 [2024-10-01 22:33:56.801463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.693 [2024-10-01 22:33:56.801472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.693 [2024-10-01 22:33:56.801481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.693 [2024-10-01 22:33:56.801489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.693 [2024-10-01 22:33:56.801499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.693 [2024-10-01 22:33:56.801507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.693 [2024-10-01 
22:33:56.801517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.693 [2024-10-01 22:33:56.801525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.693 [2024-10-01 22:33:56.801535] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be3400 is same with the state(6) to be set 00:35:01.693 [2024-10-01 22:33:56.802841] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:01.693 [2024-10-01 22:33:56.802863] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:35:01.693 [2024-10-01 22:33:56.802875] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:35:01.693 [2024-10-01 22:33:56.802887] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:35:01.694 [2024-10-01 22:33:56.802977] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:35:01.694 [2024-10-01 22:33:56.803058] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:35:01.694 [2024-10-01 22:33:56.803458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.694 [2024-10-01 22:33:56.803476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1578f40 with addr=10.0.0.2, port=4420 00:35:01.694 [2024-10-01 22:33:56.803485] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1578f40 is same with the state(6) to be set 00:35:01.694 [2024-10-01 22:33:56.803834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.694 [2024-10-01 22:33:56.803847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x156fb00 with addr=10.0.0.2, port=4420 00:35:01.694 [2024-10-01 22:33:56.803855] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156fb00 is same with the state(6) to be set 00:35:01.694 [2024-10-01 22:33:56.804181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.694 [2024-10-01 22:33:56.804193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1576950 with addr=10.0.0.2, port=4420 00:35:01.694 [2024-10-01 22:33:56.804200] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576950 is same with the state(6) to be set 00:35:01.694 [2024-10-01 22:33:56.804538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.694 [2024-10-01 22:33:56.804549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1995200 with addr=10.0.0.2, port=4420 00:35:01.694 [2024-10-01 22:33:56.804557] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1995200 is same with the state(6) to be set 00:35:01.694 [2024-10-01 22:33:56.805927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.694 [2024-10-01 22:33:56.805941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.694 [2024-10-01 
22:33:56.805953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.694 [2024-10-01 22:33:56.805961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.694 [2024-10-01 22:33:56.805971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.694 [2024-10-01 22:33:56.805979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.694 [2024-10-01 22:33:56.805989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.694 [2024-10-01 22:33:56.805996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.694 [2024-10-01 22:33:56.806006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.694 [2024-10-01 22:33:56.806014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.694 [2024-10-01 22:33:56.806028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.694 [2024-10-01 22:33:56.806036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.694 [2024-10-01 22:33:56.806046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.694 [2024-10-01 22:33:56.806053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.694 [2024-10-01 22:33:56.806063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.694 [2024-10-01 22:33:56.806070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.694 [2024-10-01 22:33:56.806081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.694 [2024-10-01 22:33:56.806089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.694 [2024-10-01 22:33:56.806098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.694 [2024-10-01 22:33:56.806106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.694 [2024-10-01 22:33:56.806116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.694 [2024-10-01 22:33:56.806125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.694 [2024-10-01 22:33:56.806135] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.694 [2024-10-01 22:33:56.806142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.694 [2024-10-01 22:33:56.806152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.694 [2024-10-01 22:33:56.806160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.694 [2024-10-01 22:33:56.806170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.694 [2024-10-01 22:33:56.806178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.694 [2024-10-01 22:33:56.806188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.694 [2024-10-01 22:33:56.806196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.694 [2024-10-01 22:33:56.806206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.694 [2024-10-01 22:33:56.806213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.694 [2024-10-01 22:33:56.806224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.694 [2024-10-01 22:33:56.806231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.694 [2024-10-01 22:33:56.806241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.694 [2024-10-01 22:33:56.806250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.694 [2024-10-01 22:33:56.806261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.694 [2024-10-01 22:33:56.806268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.694 [2024-10-01 22:33:56.806278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.694 [2024-10-01 22:33:56.806286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.694 [2024-10-01 22:33:56.806296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.694 [2024-10-01 22:33:56.806304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.694 [2024-10-01 22:33:56.806314] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.694 [2024-10-01 22:33:56.806322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.694 [2024-10-01 22:33:56.806333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.694 [2024-10-01 22:33:56.806341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.694 [2024-10-01 22:33:56.806351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.694 [2024-10-01 22:33:56.806358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.694 [2024-10-01 22:33:56.806369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.694 [2024-10-01 22:33:56.806377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.694 [2024-10-01 22:33:56.806386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.694 [2024-10-01 22:33:56.806395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.694 [2024-10-01 22:33:56.806405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.694 [2024-10-01 22:33:56.806413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.694 [2024-10-01 22:33:56.806423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.694 [2024-10-01 22:33:56.806432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.694 [2024-10-01 22:33:56.806441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.694 [2024-10-01 22:33:56.806450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.694 [2024-10-01 22:33:56.806459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.694 [2024-10-01 22:33:56.806467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.694 [2024-10-01 22:33:56.806479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.694 [2024-10-01 22:33:56.806487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.694 [2024-10-01 22:33:56.806497] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.694 [2024-10-01 22:33:56.806505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.695 [2024-10-01 22:33:56.806514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.695 [2024-10-01 22:33:56.806522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.695 [2024-10-01 22:33:56.806532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.695 [2024-10-01 22:33:56.806540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.695 [2024-10-01 22:33:56.806550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.695 [2024-10-01 22:33:56.806558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.695 [2024-10-01 22:33:56.806567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.695 [2024-10-01 22:33:56.806575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.695 [2024-10-01 22:33:56.806586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.695 [2024-10-01 22:33:56.806593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.695 [2024-10-01 22:33:56.806604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.695 [2024-10-01 22:33:56.806612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.695 [2024-10-01 22:33:56.806623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.695 [2024-10-01 22:33:56.806634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.695 [2024-10-01 22:33:56.806644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.695 [2024-10-01 22:33:56.806652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.695 [2024-10-01 22:33:56.806662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.695 [2024-10-01 22:33:56.806670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.695 [2024-10-01 22:33:56.806680] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.695 [2024-10-01 22:33:56.806689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.695 [2024-10-01 22:33:56.806699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.695 [2024-10-01 22:33:56.806708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.695 [2024-10-01 22:33:56.806718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.695 [2024-10-01 22:33:56.806725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.695 [2024-10-01 22:33:56.806736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.695 [2024-10-01 22:33:56.806743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.695 [2024-10-01 22:33:56.806753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.695 [2024-10-01 22:33:56.806761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.695 [2024-10-01 22:33:56.806771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.695 [2024-10-01 22:33:56.806779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.695 [2024-10-01 22:33:56.806789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.695 [2024-10-01 22:33:56.806797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.695 [2024-10-01 22:33:56.806806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.695 [2024-10-01 22:33:56.806815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.695 [2024-10-01 22:33:56.806825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.695 [2024-10-01 22:33:56.806833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.695 [2024-10-01 22:33:56.806843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.695 [2024-10-01 22:33:56.806851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.695 [2024-10-01 22:33:56.806862] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.695 [2024-10-01 22:33:56.806870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.695 [2024-10-01 22:33:56.806880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.695 [2024-10-01 22:33:56.806888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.695 [2024-10-01 22:33:56.806899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.695 [2024-10-01 22:33:56.806907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.695 [2024-10-01 22:33:56.806917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.695 [2024-10-01 22:33:56.806925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.695 [2024-10-01 22:33:56.806937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.695 [2024-10-01 22:33:56.806945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.695 [2024-10-01 22:33:56.806954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.695 [2024-10-01 22:33:56.806962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.695 [2024-10-01 22:33:56.806971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.695 [2024-10-01 22:33:56.806979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.695 [2024-10-01 22:33:56.806989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.695 [2024-10-01 22:33:56.806997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.695 [2024-10-01 22:33:56.807006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.695 [2024-10-01 22:33:56.807014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.695 [2024-10-01 22:33:56.807023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.695 [2024-10-01 22:33:56.807032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.695 [2024-10-01 22:33:56.807042] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.695 [2024-10-01 22:33:56.807049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.695 [2024-10-01 22:33:56.807059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.695 [2024-10-01 22:33:56.807067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.695 [2024-10-01 22:33:56.807076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.695 [2024-10-01 22:33:56.807084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:01.695 [2024-10-01 22:33:56.807093] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4840 is same with the state(6) to be set 00:35:01.695 [2024-10-01 22:33:56.809510] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:35:01.695 [2024-10-01 22:33:56.809533] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:35:01.695 [2024-10-01 22:33:56.809543] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:35:01.695 [2024-10-01 22:33:56.809554] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:35:01.695 task offset: 28160 on job bdev=Nvme7n1 fails 00:35:01.695 00:35:01.695 Latency(us) 00:35:01.695 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:01.695 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:35:01.695 Job: Nvme1n1 ended in about 0.89 seconds with error 00:35:01.695 Verification LBA range: start 0x0 length 0x400 00:35:01.695 Nvme1n1 : 0.89 143.69 8.98 71.84 0.00 293385.10 16274.77 249910.61 00:35:01.695 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:35:01.695 Job: Nvme2n1 ended in about 0.89 seconds with error 00:35:01.695 Verification LBA range: start 0x0 length 0x400 00:35:01.695 Nvme2n1 : 0.89 143.28 8.95 71.64 0.00 287847.54 18131.63 251658.24 00:35:01.695 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:35:01.695 Job: Nvme3n1 ended in about 0.90 seconds with error 00:35:01.695 Verification LBA range: start 0x0 length 0x400 00:35:01.695 Nvme3n1 : 0.90 142.88 8.93 71.44 0.00 282112.28 20534.61 270882.13 00:35:01.695 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:35:01.695 Job: Nvme4n1 ended in about 0.88 seconds with error 00:35:01.695 Verification LBA range: start 0x0 length 0x400 00:35:01.695 Nvme4n1 : 0.88 218.22 13.64 72.74 0.00 202681.39 14090.24 248162.99 00:35:01.695 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:35:01.695 Job: Nvme5n1 ended in about 0.90 seconds with error 00:35:01.695 Verification LBA range: start 0x0 length 0x400 00:35:01.696 Nvme5n1 : 0.90 142.48 8.91 71.24 0.00 270028.80 21189.97 253405.87 00:35:01.696 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:35:01.696 Job: Nvme6n1 ended in about 0.88 seconds with error 00:35:01.696 
Verification LBA range: start 0x0 length 0x400 00:35:01.696 Nvme6n1 : 0.88 217.92 13.62 72.64 0.00 193329.71 20862.29 256901.12 00:35:01.696 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:35:01.696 Job: Nvme7n1 ended in about 0.88 seconds with error 00:35:01.696 Verification LBA range: start 0x0 length 0x400 00:35:01.696 Nvme7n1 : 0.88 223.53 13.97 72.99 0.00 184547.91 21626.88 230686.72 00:35:01.696 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:35:01.696 Job: Nvme8n1 ended in about 0.90 seconds with error 00:35:01.696 Verification LBA range: start 0x0 length 0x400 00:35:01.696 Nvme8n1 : 0.90 141.96 8.87 70.98 0.00 251805.01 17257.81 230686.72 00:35:01.696 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:35:01.696 Job: Nvme9n1 ended in about 0.91 seconds with error 00:35:01.696 Verification LBA range: start 0x0 length 0x400 00:35:01.696 Nvme9n1 : 0.91 141.10 8.82 70.55 0.00 247166.58 20206.93 253405.87 00:35:01.696 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:35:01.696 Job: Nvme10n1 ended in about 0.88 seconds with error 00:35:01.696 Verification LBA range: start 0x0 length 0x400 00:35:01.696 Nvme10n1 : 0.88 145.06 9.07 72.53 0.00 232494.08 17585.49 265639.25 00:35:01.696 =================================================================================================================== 00:35:01.696 Total : 1660.13 103.76 718.60 0.00 239797.00 14090.24 270882.13 00:35:01.696 [2024-10-01 22:33:56.836756] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:35:01.696 [2024-10-01 22:33:56.836788] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:35:01.696 [2024-10-01 22:33:56.837203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.696 [2024-10-01 22:33:56.837221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a10200 with addr=10.0.0.2, port=4420 00:35:01.696 [2024-10-01 22:33:56.837230] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a10200 is same with the state(6) to be set 00:35:01.696 [2024-10-01 22:33:56.837245] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1578f40 (9): Bad file descriptor 00:35:01.696 [2024-10-01 22:33:56.837257] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x156fb00 (9): Bad file descriptor 00:35:01.696 [2024-10-01 22:33:56.837267] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1576950 (9): Bad file descriptor 00:35:01.696 [2024-10-01 22:33:56.837277] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1995200 (9): Bad file descriptor 00:35:01.696 [2024-10-01 22:33:56.837689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.696 [2024-10-01 22:33:56.837705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19ede10 with addr=10.0.0.2, port=4420 00:35:01.696 [2024-10-01 22:33:56.837712] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ede10 is same with the state(6) to be set 00:35:01.696 [2024-10-01 22:33:56.837999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.696 [2024-10-01 22:33:56.838011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a0dc10 
with addr=10.0.0.2, port=4420 00:35:01.696 [2024-10-01 22:33:56.838019] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a0dc10 is same with the state(6) to be set 00:35:01.696 [2024-10-01 22:33:56.838320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.696 [2024-10-01 22:33:56.838332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1491610 with addr=10.0.0.2, port=4420 00:35:01.696 [2024-10-01 22:33:56.838340] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1491610 is same with the state(6) to be set 00:35:01.696 [2024-10-01 22:33:56.838571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.696 [2024-10-01 22:33:56.838581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x198fa30 with addr=10.0.0.2, port=4420 00:35:01.696 [2024-10-01 22:33:56.838590] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198fa30 is same with the state(6) to be set 00:35:01.696 [2024-10-01 22:33:56.838891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:01.696 [2024-10-01 22:33:56.838903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a0f650 with addr=10.0.0.2, port=4420 00:35:01.696 [2024-10-01 22:33:56.838910] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a0f650 is same with the state(6) to be set 00:35:01.696 [2024-10-01 22:33:56.838919] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a10200 (9): Bad file descriptor 00:35:01.696 [2024-10-01 22:33:56.838928] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:01.696 [2024-10-01 22:33:56.838935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:01.696 [2024-10-01 22:33:56.838944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:01.696 [2024-10-01 22:33:56.838957] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:35:01.696 [2024-10-01 22:33:56.838964] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:35:01.696 [2024-10-01 22:33:56.838971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:35:01.696 [2024-10-01 22:33:56.838981] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:35:01.696 [2024-10-01 22:33:56.838988] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:35:01.696 [2024-10-01 22:33:56.838995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:35:01.696 [2024-10-01 22:33:56.839006] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:35:01.696 [2024-10-01 22:33:56.839013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:35:01.696 [2024-10-01 22:33:56.839020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:35:01.696 [2024-10-01 22:33:56.839044] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:35:01.696 [2024-10-01 22:33:56.839060] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:35:01.696 [2024-10-01 22:33:56.839071] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:35:01.696 [2024-10-01 22:33:56.839084] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:35:01.696 [2024-10-01 22:33:56.839094] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:35:01.696 [2024-10-01 22:33:56.839449] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:01.696 [2024-10-01 22:33:56.839461] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:01.696 [2024-10-01 22:33:56.839469] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:01.696 [2024-10-01 22:33:56.839475] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:01.696 [2024-10-01 22:33:56.839484] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ede10 (9): Bad file descriptor 00:35:01.696 [2024-10-01 22:33:56.839493] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a0dc10 (9): Bad file descriptor 00:35:01.696 [2024-10-01 22:33:56.839503] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1491610 (9): Bad file descriptor 00:35:01.696 [2024-10-01 22:33:56.839513] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x198fa30 (9): Bad file descriptor 00:35:01.696 [2024-10-01 22:33:56.839522] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a0f650 (9): Bad file descriptor 00:35:01.696 [2024-10-01 22:33:56.839530] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:35:01.696 [2024-10-01 22:33:56.839538] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:35:01.696 [2024-10-01 22:33:56.839545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:35:01.696 [2024-10-01 22:33:56.840006] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:01.696 [2024-10-01 22:33:56.840020] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:35:01.696 [2024-10-01 22:33:56.840027] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:35:01.696 [2024-10-01 22:33:56.840035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 
00:35:01.696 [2024-10-01 22:33:56.840045] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:35:01.696 [2024-10-01 22:33:56.840052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:35:01.696 [2024-10-01 22:33:56.840060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:35:01.696 [2024-10-01 22:33:56.840071] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:35:01.696 [2024-10-01 22:33:56.840077] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:35:01.696 [2024-10-01 22:33:56.840084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:35:01.696 [2024-10-01 22:33:56.840094] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:35:01.696 [2024-10-01 22:33:56.840101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:35:01.696 [2024-10-01 22:33:56.840108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:35:01.696 [2024-10-01 22:33:56.840118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:35:01.696 [2024-10-01 22:33:56.840125] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:35:01.696 [2024-10-01 22:33:56.840136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:35:01.696 [2024-10-01 22:33:56.840172] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:01.696 [2024-10-01 22:33:56.840181] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:01.696 [2024-10-01 22:33:56.840188] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:01.696 [2024-10-01 22:33:56.840194] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:01.696 [2024-10-01 22:33:56.840200] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
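The errno = 111 in the posix_sock_create entries above is ECONNREFUSED: by this point tc3 has taken the target down, so every reconnect attempt for cnode1 through cnode10 is refused until spdk_nvme_ctrlr_reconnect_poll_async gives up and nvme_ctrlr_fail marks each controller failed. A quick standalone check of the same condition (a sketch, not part of the test scripts; it assumes a Linux host where 10.0.0.2:4420 is routable but has no listener, and uses bash's /dev/tcp pseudo-device):

    # With no listener on 10.0.0.2:4420 the TCP open is refused,
    # i.e. connect() fails with errno 111 (ECONNREFUSED) -- the same
    # failure posix_sock_create reports in the entries above.
    if ! (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
        echo "connection refused, as expected with the target down"
    fi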
00:35:01.957 22:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:35:02.899 22:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 281387 00:35:02.899 22:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:35:02.899 22:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 281387 00:35:02.899 22:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:35:02.899 22:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:02.899 22:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:35:02.899 22:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:02.899 22:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 281387 00:35:02.899 22:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:35:02.899 22:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:02.899 22:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:35:02.899 22:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:35:02.899 22:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:35:02.899 22:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:02.899 22:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:35:02.899 22:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:35:02.899 22:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:35:02.899 22:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:35:02.899 22:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:35:02.899 22:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:02.899 22:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:35:02.899 22:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:02.899 22:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:35:02.899 22:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:02.899 22:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:02.899 rmmod nvme_tcp 00:35:02.899 
rmmod nvme_fabrics 00:35:02.899 rmmod nvme_keyring 00:35:03.160 22:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:03.160 22:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:35:03.160 22:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:35:03.160 22:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@515 -- # '[' -n 281006 ']' 00:35:03.160 22:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # killprocess 281006 00:35:03.160 22:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 281006 ']' 00:35:03.160 22:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 281006 00:35:03.160 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (281006) - No such process 00:35:03.160 22:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@977 -- # echo 'Process with pid 281006 is not found' 00:35:03.160 Process with pid 281006 is not found 00:35:03.160 22:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:03.160 22:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:03.160 22:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:03.160 22:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:35:03.160 22:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-save 00:35:03.160 22:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:03.160 22:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-restore 00:35:03.160 22:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:03.160 22:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:03.160 22:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:03.160 22:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:03.160 22:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:05.074 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:05.074 00:35:05.074 real 0m7.978s 00:35:05.074 user 0m19.953s 00:35:05.074 sys 0m1.298s 00:35:05.074 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:05.075 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:35:05.075 ************************************ 00:35:05.075 END TEST nvmf_shutdown_tc3 00:35:05.075 ************************************ 00:35:05.075 22:34:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:35:05.075 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:35:05.075 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:35:05.075 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:05.075 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:05.075 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:35:05.336 ************************************ 00:35:05.336 START TEST nvmf_shutdown_tc4 00:35:05.336 ************************************ 00:35:05.336 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4 00:35:05.336 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:35:05.336 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:35:05.336 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:05.336 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:05.336 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:05.336 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:05.336 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:05.336 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:05.336 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:05.336 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:05.336 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:05.336 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:05.336 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:35:05.336 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:35:05.336 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:05.336 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:35:05.336 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:05.336 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:05.336 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:05.336 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:35:05.336 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:05.336 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:35:05.336 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:05.336 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:35:05.336 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:35:05.336 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:05.337 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:05.337 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:05.337 22:34:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:05.337 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:05.337 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # is_hw=yes 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:05.337 22:34:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:05.337 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:05.338 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:05.338 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:05.338 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:05.599 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:05.599 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:05.599 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:05.599 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:05.599 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:05.599 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:35:05.599 00:35:05.599 --- 10.0.0.2 ping statistics --- 00:35:05.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:05.599 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:35:05.599 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:05.599 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:05.599 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:35:05.599 00:35:05.599 --- 10.0.0.1 ping statistics --- 00:35:05.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:05.599 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:35:05.599 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:05.599 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # return 0 00:35:05.599 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:35:05.599 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:05.599 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:05.599 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:05.599 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:05.599 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:05.599 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:05.599 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:35:05.599 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:05.599 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:05.599 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:35:05.599 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # nvmfpid=282756 00:35:05.599 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # waitforlisten 282756 00:35:05.599 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:35:05.599 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 282756 ']' 00:35:05.599 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:05.599 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:05.599 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:05.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
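Condensed, the nvmf_tcp_init sequence traced above builds the whole two-endpoint bench on one host: one E810 port (cvl_0_0) moves into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule admits TCP/4420, and the two pings prove reachability in both directions before nvmf_tgt is started inside the namespace. The same setup as a sketch with generic names (if0/if1 stand in for the cvl_0_* ports; on a machine without spare NICs a veth pair would play the same role):

    # target side lives in its own network namespace
    ip netns add spdk_tgt_ns
    ip link set if0 netns spdk_tgt_ns
    # initiator keeps if1 in the root namespace
    ip addr add 10.0.0.1/24 dev if1
    ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev if0
    ip link set if1 up
    ip netns exec spdk_tgt_ns ip link set if0 up
    ip netns exec spdk_tgt_ns ip link set lo up
    # admit NVMe/TCP traffic on port 4420
    iptables -I INPUT 1 -i if1 -p tcp --dport 4420 -j ACCEPT
    # prove both directions before launching the target in the namespace
    ping -c 1 10.0.0.2
    ip netns exec spdk_tgt_ns ping -c 1 10.0.0.1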
00:35:05.599 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:05.599 22:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:35:05.599 [2024-10-01 22:34:00.756496] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:35:05.599 [2024-10-01 22:34:00.756549] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:05.599 [2024-10-01 22:34:00.843079] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:05.860 [2024-10-01 22:34:00.907257] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:05.860 [2024-10-01 22:34:00.907295] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:05.860 [2024-10-01 22:34:00.907301] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:05.860 [2024-10-01 22:34:00.907306] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:05.860 [2024-10-01 22:34:00.907310] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:05.860 [2024-10-01 22:34:00.907419] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:35:05.860 [2024-10-01 22:34:00.907577] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:35:05.860 [2024-10-01 22:34:00.907772] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:05.860 [2024-10-01 22:34:00.907774] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:35:06.432 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:06.432 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:35:06.432 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:06.432 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:06.432 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:35:06.432 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:06.432 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:06.432 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.432 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:35:06.432 [2024-10-01 22:34:01.608152] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:06.432 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.432 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:35:06.432 22:34:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:35:06.432 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:06.432 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:35:06.432 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:35:06.432 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:35:06.432 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:35:06.432 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:35:06.432 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:35:06.432 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:35:06.432 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:35:06.432 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:35:06.432 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:35:06.432 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:35:06.432 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:35:06.432 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:35:06.432 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:35:06.432 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:35:06.432 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:35:06.432 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:35:06.432 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:35:06.432 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:35:06.432 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:35:06.432 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:35:06.432 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:35:06.432 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:35:06.432 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.432 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:35:06.693 Malloc1 
00:35:06.693 [2024-10-01 22:34:01.707207] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:06.693 Malloc2 00:35:06.693 Malloc3 00:35:06.693 Malloc4 00:35:06.693 Malloc5 00:35:06.693 Malloc6 00:35:06.693 Malloc7 00:35:06.953 Malloc8 00:35:06.953 Malloc9 00:35:06.953 Malloc10 00:35:06.953 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.953 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:35:06.953 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:06.953 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:35:06.953 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=282983 00:35:06.953 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:35:06.953 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:35:06.953 [2024-10-01 22:34:02.170844] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:35:12.247 22:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:12.247 22:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 282756 00:35:12.247 22:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 282756 ']' 00:35:12.247 22:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 282756 00:35:12.247 22:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname 00:35:12.247 22:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:12.247 22:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 282756 00:35:12.247 22:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:12.247 22:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:12.247 22:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 282756' 00:35:12.247 killing process with pid 282756 00:35:12.247 22:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 282756 00:35:12.247 22:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 282756 00:35:12.247 [2024-10-01 22:34:07.190154] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x885790 is same with the state(6) to be set 00:35:12.247 [2024-10-01 22:34:07.190198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x885790 is same with the state(6) to be set 00:35:12.247 [2024-10-01 22:34:07.190204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x885790 is same with the state(6) to be set 00:35:12.247 [2024-10-01 22:34:07.190209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x885790 is same with the state(6) to be set 00:35:12.247 [2024-10-01 22:34:07.190439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x885c60 is same with the state(6) to be set 00:35:12.247 [2024-10-01 22:34:07.190469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x885c60 is same with the state(6) to be set 00:35:12.247 [2024-10-01 22:34:07.190478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x885c60 is same with the state(6) to be set 00:35:12.247 [2024-10-01 22:34:07.190485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x885c60 is same with the state(6) to be set 00:35:12.247 [2024-10-01 22:34:07.190492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x885c60 is same with the state(6) to be set 00:35:12.247 [2024-10-01 22:34:07.190499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x885c60 is same with the state(6) to be set 00:35:12.247 [2024-10-01 22:34:07.190504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x885c60 is same with the state(6) to be set 00:35:12.247 [2024-10-01 22:34:07.190510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x885c60 is same with the state(6) to be set 00:35:12.247 [2024-10-01 22:34:07.190516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x885c60 is same with the state(6) to be set 00:35:12.247 [2024-10-01 22:34:07.190527] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x885c60 is same with the state(6) to be set 00:35:12.247 [2024-10-01 22:34:07.190817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x886130 is same with the state(6) to be set 00:35:12.247 [2024-10-01 22:34:07.190841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x886130 is same with the state(6) to be set 00:35:12.247 [2024-10-01 22:34:07.190847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x886130 is same with the state(6) to be set 00:35:12.247 [2024-10-01 22:34:07.190853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x886130 is same with the state(6) to be set 00:35:12.247 [2024-10-01 22:34:07.190859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x886130 is same with the state(6) to be set 00:35:12.247 [2024-10-01 22:34:07.190868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x886130 is same with the state(6) to be set 00:35:12.247 [2024-10-01 22:34:07.190873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x886130 is same with the state(6) to be set 00:35:12.247 [2024-10-01 22:34:07.190877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x886130 is same with the state(6) to be set 
00:35:12.247 [2024-10-01 22:34:07.190882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x886130 is same with the state(6) to be set 00:35:12.247 [2024-10-01 22:34:07.190887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x886130 is same with the state(6) to be set 00:35:12.247 [2024-10-01 22:34:07.190892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x886130 is same with the state(6) to be set 00:35:12.247 [2024-10-01 22:34:07.190966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8852c0 is same with the state(6) to be set 00:35:12.247 [2024-10-01 22:34:07.190988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8852c0 is same with the state(6) to be set 00:35:12.247 [2024-10-01 22:34:07.190994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8852c0 is same with the state(6) to be set 00:35:12.247 [2024-10-01 22:34:07.190999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8852c0 is same with the state(6) to be set 00:35:12.247 [2024-10-01 22:34:07.191004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8852c0 is same with the state(6) to be set 00:35:12.247 [2024-10-01 22:34:07.191815] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8845a0 is same with the state(6) to be set 00:35:12.247 [2024-10-01 22:34:07.191832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8845a0 is same with the state(6) to be set 00:35:12.247 [2024-10-01 22:34:07.191838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8845a0 is same with the state(6) to be set 00:35:12.247 [2024-10-01 22:34:07.191843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8845a0 is same with the state(6) to be set 00:35:12.247 [2024-10-01 22:34:07.191849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8845a0 is same with the state(6) to be set 00:35:12.247 [2024-10-01 22:34:07.192206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x884a70 is same with the state(6) to be set 00:35:12.247 [2024-10-01 22:34:07.192222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x884a70 is same with the state(6) to be set 00:35:12.247 [2024-10-01 22:34:07.192228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x884a70 is same with the state(6) to be set 00:35:12.247 [2024-10-01 22:34:07.192234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x884a70 is same with the state(6) to be set 00:35:12.247 [2024-10-01 22:34:07.192239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x884a70 is same with the state(6) to be set 00:35:12.247 [2024-10-01 22:34:07.192431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x884f40 is same with the state(6) to be set 00:35:12.247 [2024-10-01 22:34:07.192459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x884f40 is same with the state(6) to be set 00:35:12.247 [2024-10-01 22:34:07.192464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x884f40 is same with the state(6) to be set 00:35:12.247 [2024-10-01 22:34:07.192470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x884f40 is 
same with the state(6) to be set 00:35:12.247 [2024-10-01 22:34:07.192474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x884f40 is same with the state(6) to be set 00:35:12.247 [2024-10-01 22:34:07.192479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x884f40 is same with the state(6) to be set 00:35:12.248 [2024-10-01 22:34:07.192484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x884f40 is same with the state(6) to be set 00:35:12.248 [2024-10-01 22:34:07.192877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8840d0 is same with the state(6) to be set 00:35:12.248 [2024-10-01 22:34:07.192894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8840d0 is same with the state(6) to be set 00:35:12.248 [2024-10-01 22:34:07.192900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8840d0 is same with the state(6) to be set 00:35:12.248 [2024-10-01 22:34:07.192905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8840d0 is same with the state(6) to be set 00:35:12.248 [2024-10-01 22:34:07.192910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8840d0 is same with the state(6) to be set 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 starting I/O failed: -6 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 starting I/O failed: -6 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 starting I/O failed: -6 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 starting I/O failed: -6 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 starting I/O failed: -6 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 starting I/O failed: -6 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 starting I/O failed: -6 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 starting I/O failed: -6 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 Write completed 
with error (sct=0, sc=8) 00:35:12.248 starting I/O failed: -6 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 [2024-10-01 22:34:07.194588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:12.248 starting I/O failed: -6 00:35:12.248 starting I/O failed: -6 00:35:12.248 starting I/O failed: -6 00:35:12.248 starting I/O failed: -6 00:35:12.248 starting I/O failed: -6 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 starting I/O failed: -6 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 starting I/O failed: -6 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 starting I/O failed: -6 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 starting I/O failed: -6 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 starting I/O failed: -6 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 starting I/O failed: -6 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 starting I/O failed: -6 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 starting I/O failed: -6 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 starting I/O failed: -6 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 starting I/O failed: -6 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 starting I/O failed: -6 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 starting I/O failed: -6 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 starting I/O failed: -6 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 starting I/O failed: -6 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 starting I/O failed: -6 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 [2024-10-01 22:34:07.195639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 starting I/O failed: -6 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 starting I/O failed: -6 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 starting I/O failed: -6 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 starting I/O failed: -6 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 starting I/O failed: -6 00:35:12.248 Write completed with error (sct=0, sc=8) 00:35:12.248 starting I/O failed: -6 00:35:12.248 Write completed 
with error (sct=0, sc=8)
00:35:12.248 [elided: long run of repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines]
00:35:12.248 [2024-10-01 22:34:07.196551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:12.249 [elided: repeated write-error/I/O-failed lines]
00:35:12.249 [2024-10-01 22:34:07.197673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x886ad0 is same with the state(6) to be set [message repeated 6 times, 22:34:07.197673 through 22:34:07.197717]
00:35:12.249 [elided: repeated write-error/I/O-failed lines]
00:35:12.249 [2024-10-01 22:34:07.197936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x886fa0 is same with the state(6) to be set [message repeated 7 times, 22:34:07.197936 through 22:34:07.197976]
00:35:12.249 [2024-10-01 22:34:07.198068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:12.249 NVMe io qpair process completion error
00:35:12.249 [2024-10-01 22:34:07.198324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x887470 is same with the state(6) to be set [message repeated 7 times, 22:34:07.198324 through 22:34:07.198370]
00:35:12.249 [elided: repeated write-error/I/O-failed lines]
00:35:12.249 [2024-10-01 22:34:07.199248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:12.250 [elided: repeated write-error/I/O-failed lines]
00:35:12.250 [2024-10-01 22:34:07.200047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:12.250 [elided: repeated write-error/I/O-failed lines]
00:35:12.250 [2024-10-01 22:34:07.200976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:12.251 [elided: repeated write-error/I/O-failed lines]
00:35:12.251 [2024-10-01 22:34:07.202412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:12.251 NVMe io qpair process completion error
00:35:12.251 [elided: repeated write-error/I/O-failed lines]
00:35:12.251 [2024-10-01 22:34:07.203620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:12.251 [elided: repeated write-error/I/O-failed lines]
00:35:12.251 [2024-10-01 22:34:07.204459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:12.252 [elided: repeated write-error/I/O-failed lines]
00:35:12.252 [2024-10-01 22:34:07.205401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:12.252 [elided: repeated write-error/I/O-failed lines]
00:35:12.252 [2024-10-01 22:34:07.208657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:12.252 NVMe io qpair process completion error
00:35:12.253 [elided: repeated write-error/I/O-failed lines]
00:35:12.253 [2024-10-01 22:34:07.209952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:12.253 [elided: repeated write-error/I/O-failed lines]
00:35:12.253 [2024-10-01 22:34:07.210776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:12.253 [elided: repeated write-error/I/O-failed lines]
00:35:12.253 [2024-10-01 22:34:07.212122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:12.254 [elided: repeated write-error/I/O-failed lines]
00:35:12.254 [2024-10-01 22:34:07.213769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:12.254 NVMe io qpair process completion error
00:35:12.254 [elided: repeated write-error/I/O-failed lines]
00:35:12.254 [2024-10-01 22:34:07.214857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:12.254 [elided: repeated write-error/I/O-failed lines]
00:35:12.254 [2024-10-01 22:34:07.215871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:12.254 [elided: repeated write-error/I/O-failed lines]
00:35:12.255 [2024-10-01 22:34:07.217066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:12.255 [elided: repeated write-error/I/O-failed lines]
00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 starting I/O failed: -6 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 starting I/O failed: -6 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 starting I/O failed: -6 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 starting I/O failed: -6 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 starting I/O failed: -6 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 starting I/O failed: -6 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 starting I/O failed: -6 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 starting I/O failed: -6 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 starting I/O failed: -6 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 starting I/O failed: -6 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 starting I/O failed: -6 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 starting I/O failed: -6 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 starting I/O failed: -6 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 starting I/O failed: -6 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 starting I/O failed: -6 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 starting I/O failed: -6 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 starting I/O failed: -6 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 starting I/O failed: -6 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 starting I/O failed: -6 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 starting I/O failed: -6 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 starting I/O failed: -6 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 starting I/O failed: -6 00:35:12.255 [2024-10-01 22:34:07.221338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:12.255 NVMe io qpair process completion error 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 starting I/O failed: -6 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 starting I/O failed: -6 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 starting I/O failed: -6 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 starting I/O failed: -6 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 starting I/O failed: -6 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 Write completed with error (sct=0, sc=8) 
00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 starting I/O failed: -6 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 starting I/O failed: -6 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 starting I/O failed: -6 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 starting I/O failed: -6 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 starting I/O failed: -6 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 starting I/O failed: -6 00:35:12.255 [2024-10-01 22:34:07.222478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:12.255 starting I/O failed: -6 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 starting I/O failed: -6 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 starting I/O failed: -6 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 starting I/O failed: -6 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 starting I/O failed: -6 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 starting I/O failed: -6 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 starting I/O failed: -6 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 starting I/O failed: -6 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 starting I/O failed: -6 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 starting I/O failed: -6 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 starting I/O failed: -6 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 starting I/O failed: -6 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.255 starting I/O failed: -6 00:35:12.255 Write completed with error (sct=0, sc=8) 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 Write completed with error (sct=0, 
sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 [2024-10-01 22:34:07.223320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 
00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 [2024-10-01 22:34:07.224264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 
starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.256 Write completed with error (sct=0, sc=8) 00:35:12.256 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 [2024-10-01 22:34:07.225903] nvme_qpair.c: 
804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:12.257 NVMe io qpair process completion error 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 [2024-10-01 22:34:07.226930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 
starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 [2024-10-01 22:34:07.227822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 
00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.257 starting I/O failed: -6 00:35:12.257 Write completed with error (sct=0, sc=8) 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 [2024-10-01 22:34:07.228741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write 
completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write 
completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 [2024-10-01 22:34:07.230162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:12.258 NVMe io qpair process completion error 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 Write completed with 
error (sct=0, sc=8) 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 [2024-10-01 22:34:07.231397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.258 starting I/O failed: -6 00:35:12.258 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 Write completed with error (sct=0, 
sc=8) 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 [2024-10-01 22:34:07.232242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 
Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 [2024-10-01 22:34:07.233169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 Write completed with error (sct=0, sc=8) 00:35:12.259 starting I/O failed: -6 00:35:12.259 
Write completed with error (sct=0, sc=8)
00:35:12.259 starting I/O failed: -6
[... the "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" pair repeats for every write still outstanding on the failing qpairs ...]
00:35:12.260 [2024-10-01 22:34:07.236189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:12.260 NVMe io qpair process completion error
[... repeated write-error entries elided ...]
00:35:12.262 [2024-10-01 22:34:07.240054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:12.262 NVMe io qpair process completion error
[... repeated write-error entries elided ...]
00:35:12.262 [2024-10-01 22:34:07.241237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error entries elided ...]
00:35:12.262 [2024-10-01 22:34:07.242104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error entries elided ...]
00:35:12.263 [2024-10-01 22:34:07.243040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error entries elided ...]
00:35:12.263 [2024-10-01 22:34:07.246397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:12.263 NVMe io qpair process completion error
00:35:12.263 Initializing NVMe Controllers
00:35:12.263 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:35:12.263 Controller IO queue size 128, less than required.
00:35:12.263 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:35:12.263 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:35:12.263 Controller IO queue size 128, less than required.
00:35:12.263 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:35:12.263 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:35:12.263 Controller IO queue size 128, less than required.
00:35:12.263 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:35:12.263 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:35:12.263 Controller IO queue size 128, less than required.
00:35:12.263 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:35:12.263 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:35:12.263 Controller IO queue size 128, less than required.
00:35:12.263 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:35:12.263 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:35:12.263 Controller IO queue size 128, less than required.
00:35:12.263 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:35:12.263 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:35:12.263 Controller IO queue size 128, less than required.
00:35:12.263 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:35:12.263 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:35:12.263 Controller IO queue size 128, less than required.
00:35:12.263 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:35:12.263 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:35:12.263 Controller IO queue size 128, less than required.
00:35:12.263 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:35:12.263 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:35:12.263 Controller IO queue size 128, less than required.
00:35:12.263 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:35:12.263 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:35:12.263 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:35:12.263 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:35:12.263 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:35:12.263 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:35:12.263 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:35:12.263 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:35:12.263 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:35:12.263 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:35:12.263 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:35:12.263 Initialization complete. Launching workers.
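Every failed completion logged before this initialization block carries the pair (sct=0, sc=8): status code type 0 is the Generic Command Status set, and, per the NVMe base spec, status code 0x08 in that set is "Command Aborted due to SQ Deletion" — consistent with the qpairs being torn down while writes were in flight during the shutdown test. A minimal, hypothetical helper for decoding such pairs when reading these logs (the function name and the small lookup table are illustrative, not part of the test suite; only codes seen in this log are mapped):

#!/usr/bin/env bash
# Illustrative decoder for the (sct, sc) pairs printed above.
# Mapping follows the NVMe base spec's Generic Command Status values;
# anything unmapped falls through to "unknown".
decode_nvme_status() {
  local sct=$1 sc=$2
  case "$sct/$sc" in
    0/0) echo "Generic: Successful Completion" ;;
    0/4) echo "Generic: Data Transfer Error" ;;
    0/8) echo "Generic: Command Aborted due to SQ Deletion" ;;
    *)   echo "unknown (sct=$sct, sc=$sc)" ;;
  esac
}

decode_nvme_status 0 8   # -> Generic: Command Aborted due to SQ Deletion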
00:35:12.263 ========================================================
00:35:12.263                                                                              Latency(us)
00:35:12.263 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:35:12.263 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:    1904.63      81.84   67222.94     516.95  120231.26
00:35:12.263 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:     1905.05      81.86   67226.83     816.77  121763.13
00:35:12.263 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     1901.45      81.70   67375.80     671.66  119444.30
00:35:12.263 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:     1915.43      82.30   66931.65     651.55  121602.53
00:35:12.263 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:     1910.56      82.09   67129.35     510.96  120498.40
00:35:12.263 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:     1920.52      82.52   66838.98     616.07  122921.02
00:35:12.263 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:     1889.80      81.20   67944.43     923.05  123234.65
00:35:12.263 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:     1912.68      82.19   67151.47     666.16  131460.60
00:35:12.263 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:     1905.47      81.88   67436.42     697.23  134280.27
00:35:12.263 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:     1926.45      82.78   66723.14     865.89  135715.14
00:35:12.263 ========================================================
00:35:12.264 Total                                                                  :   19092.03     820.36   67196.47     510.96  135715.14
00:35:12.264
00:35:12.264 [2024-10-01 22:34:07.251074] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb59d0 is same with the state(6) to be set
00:35:12.264 [2024-10-01 22:34:07.251120] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb3fc0 is same with the state(6) to be set
00:35:12.264 [2024-10-01 22:34:07.251150] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb57f0 is same with the state(6) to be set
00:35:12.264 [2024-10-01 22:34:07.251179] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb3960 is same with the state(6) to be set
00:35:12.264 [2024-10-01 22:34:07.251209] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb3c90 is same with the state(6) to be set
00:35:12.264 [2024-10-01 22:34:07.251243] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9e60 is same with the state(6) to be set
00:35:12.264 [2024-10-01 22:34:07.251272] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbba190 is same with the state(6) to be set
00:35:12.264 [2024-10-01 22:34:07.251300] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb3630 is same with the state(6) to be set
00:35:12.264 [2024-10-01 22:34:07.251329] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb5bb0 is same with the state(6) to be set
00:35:12.264 [2024-10-01 22:34:07.251358] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbba4c0 is same with the state(6) to be set
00:35:12.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:35:12.264 22:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
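The run above ends with "spdk_nvme_perf: errors occurred" after every controller reported "Controller IO queue size 128, less than required": the queue depth requested by the perf tool exceeds what the fabrics controllers granted, so excess requests sit queued in the NVMe driver. A hedged sketch of rerunning the same workload with a depth that fits inside the 128-entry queues (binary path and transport ID mirror the log; the 64/4096/10 values are illustrative):

# Keep the per-namespace queue depth (-q) below the granted IO queue
# size of 128 so requests are not parked in the driver's software queue.
# -o is the I/O size in bytes, -w the workload type, -t the run time in
# seconds, -r the transport ID of one target subsystem.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
  -q 64 -o 4096 -w write -t 10 \
  -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'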
00:35:13.648 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 282983
00:35:13.648 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0
00:35:13.648 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 282983
00:35:13.648 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait
00:35:13.648 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:35:13.648 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait
00:35:13.648 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:35:13.648 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 282983
00:35:13.648 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1
00:35:13.648 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:35:13.648 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:35:13.648 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:35:13.648 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:35:13.648 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:35:13.648 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:35:13.648 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:35:13.648 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:35:13.648 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@514 -- # nvmfcleanup
00:35:13.648 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:35:13.648 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:35:13.648 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:35:13.648 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:35:13.648 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:35:13.648 rmmod nvme_tcp
00:35:13.648 rmmod nvme_fabrics
00:35:13.648 rmmod nvme_keyring
00:35:13.648 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:35:13.648 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
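The "NOT wait 282983" trace above is autotest_common.sh's expect-failure helper: it runs a command, records the exit status in es, and itself succeeds only if the command failed (here, waiting on the already-killed target returns 1). A condensed sketch of that pattern (simplified from the trace; the real helper also validates the argument via valid_exec_arg and special-cases statuses above 128, which indicate death by signal):

# Expect-failure wrapper, condensed from the xtrace above:
# return 0 iff the wrapped command exits non-zero.
NOT() {
  local es=0
  "$@" || es=$?
  # Only assert "did not succeed"; the real helper additionally
  # distinguishes es > 128 (terminated by a signal).
  (( es != 0 ))
}

NOT wait 282983   # succeeds, because wait on an unknown/reaped pid fails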
00:35:13.648 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:35:13.648 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@515 -- # '[' -n 282756 ']'
00:35:13.648 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # killprocess 282756
00:35:13.648 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 282756 ']'
00:35:13.648 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 282756
00:35:13.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (282756) - No such process
00:35:13.648 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@977 -- # echo 'Process with pid 282756 is not found'
00:35:13.648 Process with pid 282756 is not found
00:35:13.648 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:35:13.648 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:35:13.648 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:35:13.648 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:35:13.648 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-save
00:35:13.648 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:35:13.648 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-restore
00:35:13.648 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:35:13.648 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:35:13.648 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:35:13.648 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:35:13.648 22:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:35:15.562 22:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:35:15.562
00:35:15.562 real	0m10.319s
00:35:15.562 user	0m28.013s
00:35:15.562 sys	0m4.074s
00:35:15.562 22:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:35:15.562 22:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:35:15.562 ************************************
00:35:15.562 END TEST nvmf_shutdown_tc4
00:35:15.562 ************************************
00:35:15.562 22:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:35:15.562
00:35:15.562 real	0m43.694s
00:35:15.562 user	1m46.864s
00:35:15.562 sys	0m13.874s
00:35:15.562 22:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable
00:35:15.562 22:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 --
# set +x 00:35:15.562 ************************************ 00:35:15.562 END TEST nvmf_shutdown 00:35:15.562 ************************************ 00:35:15.562 22:34:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:35:15.562 00:35:15.562 real 13m2.540s 00:35:15.562 user 27m38.607s 00:35:15.562 sys 3m42.709s 00:35:15.563 22:34:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:15.563 22:34:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:35:15.563 ************************************ 00:35:15.563 END TEST nvmf_target_extra 00:35:15.563 ************************************ 00:35:15.563 22:34:10 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:35:15.563 22:34:10 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:15.563 22:34:10 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:15.563 22:34:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:15.824 ************************************ 00:35:15.824 START TEST nvmf_host 00:35:15.824 ************************************ 00:35:15.824 22:34:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:35:15.824 * Looking for test storage... 00:35:15.824 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:35:15.824 22:34:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:15.824 22:34:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version 00:35:15.824 22:34:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:15.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:15.824 --rc genhtml_branch_coverage=1 00:35:15.824 --rc genhtml_function_coverage=1 00:35:15.824 --rc genhtml_legend=1 00:35:15.824 --rc geninfo_all_blocks=1 00:35:15.824 --rc geninfo_unexecuted_blocks=1 00:35:15.824 00:35:15.824 ' 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:15.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:15.824 --rc genhtml_branch_coverage=1 00:35:15.824 --rc genhtml_function_coverage=1 00:35:15.824 --rc genhtml_legend=1 00:35:15.824 --rc geninfo_all_blocks=1 00:35:15.824 --rc geninfo_unexecuted_blocks=1 00:35:15.824 00:35:15.824 ' 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:15.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:15.824 --rc genhtml_branch_coverage=1 00:35:15.824 --rc genhtml_function_coverage=1 00:35:15.824 --rc genhtml_legend=1 00:35:15.824 --rc geninfo_all_blocks=1 00:35:15.824 --rc geninfo_unexecuted_blocks=1 00:35:15.824 00:35:15.824 ' 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:15.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:15.824 --rc genhtml_branch_coverage=1 00:35:15.824 --rc genhtml_function_coverage=1 00:35:15.824 --rc genhtml_legend=1 00:35:15.824 --rc geninfo_all_blocks=1 00:35:15.824 --rc geninfo_unexecuted_blocks=1 00:35:15.824 00:35:15.824 ' 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
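Both here and in the shutdown suite, the lcov handling is gated on the version check traced above: lt 1.15 2 splits each version string on '.', '-' and ':' and compares the components numerically, left to right. A compact sketch of that comparison (a condensed reimplementation of the scripts/common.sh logic visible in the trace, reduced to the "less than" case):

# Component-wise "is version A less than version B", as in the trace:
# split on '.', '-' and ':', then compare numerically field by field,
# treating missing fields as 0.
lt() {
  local -a ver1 ver2
  IFS='.-:' read -ra ver1 <<< "$1"
  IFS='.-:' read -ra ver2 <<< "$2"
  local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < max; v++ )); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
  done
  return 1   # equal versions are not "less than"
}

lt 1.15 2 && echo "1.15 < 2"   # prints: 1.15 < 2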
00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:15.824 22:34:11 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:35:16.085 22:34:11 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.085 22:34:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:35:16.085 22:34:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:16.085 22:34:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:16.085 22:34:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:16.085 22:34:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:16.085 22:34:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:16.085 22:34:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:16.085 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:16.085 22:34:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:16.085 22:34:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:16.085 22:34:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:16.085 22:34:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:35:16.085 22:34:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:35:16.085 22:34:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:35:16.085 22:34:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:35:16.085 22:34:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:16.085 22:34:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:16.085 22:34:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.085 ************************************ 00:35:16.085 START TEST nvmf_multicontroller 00:35:16.085 ************************************ 00:35:16.086 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:35:16.086 * Looking for test storage... 
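Each time nvmf/common.sh is sourced above, line 33 logs "[: : integer expression expected": the test expands to '[' '' -eq 1 ']' because the variable under test is empty, and -eq requires an integer on both sides. A hedged sketch of the failure and the defensive form (the variable name is hypothetical; the point is defaulting the expansion):

# The failing test, reconstructed: an unset/empty variable reaches a
# numeric comparison, so [ sees '' where it needs an integer.
SOME_FLAG=''                        # hypothetical variable name
[ "$SOME_FLAG" -eq 1 ] 2>/dev/null  # -> "integer expression expected"

# Defensive form: default the expansion to 0 so the test is always
# well-formed, and the empty case simply counts as "not enabled".
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
  echo "flag enabled"
fi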
00:35:16.086 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:16.086 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:16.086 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lcov --version 00:35:16.086 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:16.086 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:16.086 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:16.086 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:16.086 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:16.086 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:35:16.086 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:35:16.086 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:35:16.086 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:35:16.086 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:35:16.086 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:35:16.086 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:35:16.086 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:16.086 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:35:16.086 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:35:16.086 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:16.086 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:16.086 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:35:16.086 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:35:16.086 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:16.086 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:35:16.086 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:35:16.086 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:35:16.086 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:35:16.086 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:16.086 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:35:16.086 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:35:16.086 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:16.086 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:16.086 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:35:16.086 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:16.086 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:16.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:16.086 --rc genhtml_branch_coverage=1 00:35:16.086 --rc genhtml_function_coverage=1 00:35:16.086 --rc genhtml_legend=1 00:35:16.086 --rc geninfo_all_blocks=1 00:35:16.086 --rc geninfo_unexecuted_blocks=1 00:35:16.086 00:35:16.086 ' 00:35:16.086 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:16.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:16.086 --rc genhtml_branch_coverage=1 00:35:16.086 --rc genhtml_function_coverage=1 00:35:16.086 --rc genhtml_legend=1 00:35:16.086 --rc geninfo_all_blocks=1 00:35:16.086 --rc geninfo_unexecuted_blocks=1 00:35:16.086 00:35:16.086 ' 00:35:16.086 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:16.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:16.086 --rc genhtml_branch_coverage=1 00:35:16.086 --rc genhtml_function_coverage=1 00:35:16.086 --rc genhtml_legend=1 00:35:16.086 --rc geninfo_all_blocks=1 00:35:16.086 --rc geninfo_unexecuted_blocks=1 00:35:16.086 00:35:16.086 ' 00:35:16.086 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:16.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:16.086 --rc genhtml_branch_coverage=1 00:35:16.086 --rc genhtml_function_coverage=1 00:35:16.086 --rc genhtml_legend=1 00:35:16.086 --rc geninfo_all_blocks=1 00:35:16.086 --rc geninfo_unexecuted_blocks=1 00:35:16.086 00:35:16.086 ' 00:35:16.086 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:16.086 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:35:16.347 22:34:11 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:16.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:16.347 22:34:11 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:35:16.347 22:34:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:35:24.485 
22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:24.485 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:24.485 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:24.485 22:34:18 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:24.485 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:24.486 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:24.486 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # is_hw=yes 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # nvmf_tcp_init 
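The nvmf_tcp_init step that follows moves the first discovered port (cvl_0_0) into a private network namespace so that target and initiator traffic crosses the two physical E810 ports instead of the loopback. A minimal standalone sketch of that technique, reusing the interface names and addresses visible in this trace (the authoritative implementation lives in test/nvmf/common.sh):

# Give the target its own namespace; leave the initiator port in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Address both ends of the link on the same /24.
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side

# Bring everything up, including loopback inside the namespace.
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port through any host firewall, then verify reachability
# in both directions before the target is started.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Teardown is the mirror image (flush the addresses, delete the namespace), which is what _remove_spdk_ns handles both here and again during nvmftestfini.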
00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:24.486 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:24.486 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.682 ms 00:35:24.486 00:35:24.486 --- 10.0.0.2 ping statistics --- 00:35:24.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:24.486 rtt min/avg/max/mdev = 0.682/0.682/0.682/0.000 ms 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:24.486 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:24.486 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:35:24.486 00:35:24.486 --- 10.0.0.1 ping statistics --- 00:35:24.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:24.486 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # return 0 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # nvmfpid=288636 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # waitforlisten 288636 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 288636 ']' 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:24.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:24.486 22:34:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:35:24.486 [2024-10-01 22:34:18.740100] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:35:24.486 [2024-10-01 22:34:18.740167] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:24.486 [2024-10-01 22:34:18.829886] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:24.486 [2024-10-01 22:34:18.923265] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:24.486 [2024-10-01 22:34:18.923328] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:24.486 [2024-10-01 22:34:18.923336] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:24.486 [2024-10-01 22:34:18.923344] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:24.486 [2024-10-01 22:34:18.923350] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:24.486 [2024-10-01 22:34:18.923483] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:35:24.486 [2024-10-01 22:34:18.923669] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:35:24.486 [2024-10-01 22:34:18.923721] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:24.486 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:24.486 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:35:24.486 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:24.486 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:24.486 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:35:24.486 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:24.486 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:24.486 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.486 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:35:24.486 [2024-10-01 22:34:19.599638] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:24.486 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.486 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:24.486 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.486 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:35:24.486 Malloc0 00:35:24.486 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.486 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:24.486 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.486 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:35:24.486 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.486 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:24.487 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.487 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:35:24.487 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.487 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:24.487 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.487 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:35:24.487 [2024-10-01 22:34:19.655173] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:24.487 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.487 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:35:24.487 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.487 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:35:24.487 [2024-10-01 22:34:19.667133] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:35:24.487 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.487 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:35:24.487 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.487 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:35:24.487 Malloc1 00:35:24.487 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.487 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:35:24.487 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.487 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:35:24.487 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.487 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:35:24.487 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.487 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:35:24.487 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.487 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:24.487 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.487 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:35:24.487 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.487 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:35:24.487 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.487 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:35:24.748 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.748 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=288732 00:35:24.748 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:24.748 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:35:24.748 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 288732 /var/tmp/bdevperf.sock 00:35:24.749 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 288732 ']' 00:35:24.749 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:24.749 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:24.749 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:24.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
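bdevperf is launched below with -z (pause until configured over RPC) on its own socket, so the host side is driven independently of the target's default /var/tmp/spdk.sock. The rpc_cmd calls in this trace are thin wrappers around scripts/rpc.py (or its daemonized equivalent); a sketch of the corresponding direct invocations, using the same flags that appear in the trace, with paths relative to the SPDK tree:

# Start the host-side I/O generator, paused until it receives RPCs.
build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w write -t 1 -f &

# The first attach succeeds and surfaces the remote namespace as bdev NVMe0n1.
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1

# Re-attaching the same controller name with a different hostnqn must be
# rejected; the test below expects JSON-RPC error -114 from exactly this call.
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 \
    -q nqn.2021-09-7.io.spdk:00001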
00:35:24.749 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:24.749 22:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:35:25.690 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:25.690 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:35:25.690 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:35:25.690 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.690 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:35:25.690 NVMe0n1 00:35:25.690 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.690 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:35:25.690 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:35:25.690 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.690 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:35:25.690 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.690 1 00:35:25.690 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:35:25.690 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:35:25.690 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:35:25.690 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:35:25.690 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:25.690 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:35:25.690 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:25.690 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:35:25.690 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.690 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:35:25.690 request: 00:35:25.690 { 00:35:25.690 "name": "NVMe0", 00:35:25.690 "trtype": "tcp", 00:35:25.690 "traddr": "10.0.0.2", 00:35:25.690 "adrfam": "ipv4", 00:35:25.690 "trsvcid": "4420", 00:35:25.690 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:35:25.690 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:35:25.690 "hostaddr": "10.0.0.1", 00:35:25.690 "prchk_reftag": false, 00:35:25.690 "prchk_guard": false, 00:35:25.690 "hdgst": false, 00:35:25.690 "ddgst": false, 00:35:25.690 "allow_unrecognized_csi": false, 00:35:25.690 "method": "bdev_nvme_attach_controller", 00:35:25.690 "req_id": 1 00:35:25.690 } 00:35:25.690 Got JSON-RPC error response 00:35:25.690 response: 00:35:25.690 { 00:35:25.690 "code": -114, 00:35:25.690 "message": "A controller named NVMe0 already exists with the specified network path" 00:35:25.690 } 00:35:25.690 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:35:25.690 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:35:25.690 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:25.690 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:25.690 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:25.690 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:35:25.690 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:35:25.690 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:35:25.690 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:35:25.690 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:25.690 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:35:25.690 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:25.690 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:35:25.690 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.690 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:35:25.690 request: 00:35:25.690 { 00:35:25.690 "name": "NVMe0", 00:35:25.690 "trtype": "tcp", 00:35:25.690 "traddr": "10.0.0.2", 00:35:25.690 "adrfam": "ipv4", 00:35:25.690 "trsvcid": "4420", 00:35:25.690 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:25.690 "hostaddr": "10.0.0.1", 00:35:25.690 "prchk_reftag": false, 00:35:25.690 "prchk_guard": false, 00:35:25.690 "hdgst": false, 00:35:25.690 "ddgst": false, 00:35:25.690 "allow_unrecognized_csi": false, 00:35:25.690 "method": "bdev_nvme_attach_controller", 00:35:25.690 "req_id": 1 00:35:25.690 } 00:35:25.690 Got JSON-RPC error response 00:35:25.690 response: 00:35:25.690 { 00:35:25.690 "code": -114, 00:35:25.690 "message": "A controller named NVMe0 already exists with the specified network path" 00:35:25.690 } 00:35:25.690 22:34:20 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:35:25.690 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:35:25.690 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:25.690 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:25.690 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:25.690 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:35:25.690 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:35:25.690 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:35:25.690 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:35:25.690 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:25.690 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:35:25.690 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:25.690 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:35:25.691 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.691 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:35:25.691 request: 00:35:25.691 { 00:35:25.691 "name": "NVMe0", 00:35:25.691 "trtype": "tcp", 00:35:25.691 "traddr": "10.0.0.2", 00:35:25.691 "adrfam": "ipv4", 00:35:25.691 "trsvcid": "4420", 00:35:25.691 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:25.691 "hostaddr": "10.0.0.1", 00:35:25.691 "prchk_reftag": false, 00:35:25.691 "prchk_guard": false, 00:35:25.691 "hdgst": false, 00:35:25.691 "ddgst": false, 00:35:25.691 "multipath": "disable", 00:35:25.691 "allow_unrecognized_csi": false, 00:35:25.691 "method": "bdev_nvme_attach_controller", 00:35:25.691 "req_id": 1 00:35:25.691 } 00:35:25.691 Got JSON-RPC error response 00:35:25.691 response: 00:35:25.691 { 00:35:25.691 "code": -114, 00:35:25.691 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:35:25.691 } 00:35:25.691 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:35:25.691 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:35:25.691 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:25.691 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:25.691 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:25.691 22:34:20 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:35:25.691 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:35:25.691 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:35:25.691 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:35:25.691 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:25.691 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:35:25.691 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:25.691 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:35:25.691 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.691 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:35:25.691 request: 00:35:25.691 { 00:35:25.691 "name": "NVMe0", 00:35:25.691 "trtype": "tcp", 00:35:25.691 "traddr": "10.0.0.2", 00:35:25.691 "adrfam": "ipv4", 00:35:25.691 "trsvcid": "4420", 00:35:25.691 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:25.691 "hostaddr": "10.0.0.1", 00:35:25.691 "prchk_reftag": false, 00:35:25.691 "prchk_guard": false, 00:35:25.691 "hdgst": false, 00:35:25.691 "ddgst": false, 00:35:25.691 "multipath": "failover", 00:35:25.691 "allow_unrecognized_csi": false, 00:35:25.691 "method": "bdev_nvme_attach_controller", 00:35:25.691 "req_id": 1 00:35:25.691 } 00:35:25.691 Got JSON-RPC error response 00:35:25.691 response: 00:35:25.691 { 00:35:25.691 "code": -114, 00:35:25.691 "message": "A controller named NVMe0 already exists with the specified network path" 00:35:25.691 } 00:35:25.691 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:35:25.691 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:35:25.691 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:25.691 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:25.691 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:25.691 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:35:25.691 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.691 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:35:25.691 00:35:25.691 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
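Each of the rejected attach attempts above runs through the same expected-failure bookkeeping from autotest_common.sh: NOT sets es=0, runs the command, records any non-zero status in es, and the trailing (( !es == 0 )) asserts that the command really failed. A condensed sketch of that pattern (the real helper also exempts signal deaths with es > 128 and has other details omitted here):

# Expected-failure wrapper: succeed only when the wrapped command fails.
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))   # invert: a non-zero status from the command is a pass
}

# Usage, mirroring the multipath=disable case traced above:
NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable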
00:35:25.691 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:35:25.691 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.691 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:35:25.691 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.691 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:35:25.691 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.691 22:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:35:25.952 00:35:25.952 22:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.952 22:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:35:25.952 22:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:35:25.952 22:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.952 22:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:35:25.952 22:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.952 22:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:35:25.952 22:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:27.337 { 00:35:27.337 "results": [ 00:35:27.337 { 00:35:27.337 "job": "NVMe0n1", 00:35:27.337 "core_mask": "0x1", 00:35:27.337 "workload": "write", 00:35:27.337 "status": "finished", 00:35:27.337 "queue_depth": 128, 00:35:27.337 "io_size": 4096, 00:35:27.337 "runtime": 1.008225, 00:35:27.337 "iops": 19294.304346748, 00:35:27.337 "mibps": 75.36837635448437, 00:35:27.337 "io_failed": 0, 00:35:27.337 "io_timeout": 0, 00:35:27.337 "avg_latency_us": 6608.273342586404, 00:35:27.337 "min_latency_us": 2402.9866666666667, 00:35:27.337 "max_latency_us": 8738.133333333333 00:35:27.337 } 00:35:27.337 ], 00:35:27.337 "core_count": 1 00:35:27.337 } 00:35:27.337 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:35:27.337 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.337 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:35:27.337 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.337 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:35:27.337 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 288732 00:35:27.337 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@950 -- # '[' -z 288732 ']' 00:35:27.337 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 288732 00:35:27.337 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:35:27.337 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:27.337 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 288732 00:35:27.337 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:27.337 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:27.337 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 288732' 00:35:27.337 killing process with pid 288732 00:35:27.337 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 288732 00:35:27.337 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 288732 00:35:27.337 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:27.337 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.337 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:35:27.337 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.337 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:27.337 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.337 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:35:27.337 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.337 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:35:27.337 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:27.337 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:35:27.337 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:35:27.337 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:35:27.337 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:35:27.337 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:35:27.337 [2024-10-01 22:34:19.787434] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:35:27.337 [2024-10-01 22:34:19.787495] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid288732 ] 00:35:27.337 [2024-10-01 22:34:19.848303] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:27.337 [2024-10-01 22:34:19.912921] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:35:27.337 [2024-10-01 22:34:21.119577] bdev.c:4724:bdev_name_add: *ERROR*: Bdev name a005d05b-753f-40de-abd9-71edbc9bc928 already exists 00:35:27.337 [2024-10-01 22:34:21.119607] bdev.c:7885:bdev_register: *ERROR*: Unable to add uuid:a005d05b-753f-40de-abd9-71edbc9bc928 alias for bdev NVMe1n1 00:35:27.337 [2024-10-01 22:34:21.119616] bdev_nvme.c:4481:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:35:27.337 Running I/O for 1 seconds... 00:35:27.337 19293.00 IOPS, 75.36 MiB/s 00:35:27.337 Latency(us) 00:35:27.337 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:27.337 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:35:27.337 NVMe0n1 : 1.01 19294.30 75.37 0.00 0.00 6608.27 2402.99 8738.13 00:35:27.337 =================================================================================================================== 00:35:27.337 Total : 19294.30 75.37 0.00 0.00 6608.27 2402.99 8738.13 00:35:27.337 Received shutdown signal, test time was about 1.000000 seconds 00:35:27.337 00:35:27.337 Latency(us) 00:35:27.337 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:27.337 =================================================================================================================== 00:35:27.337 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:27.337 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:35:27.337 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:27.337 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:35:27.337 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:35:27.337 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:27.337 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:35:27.337 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:27.337 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:35:27.337 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:27.337 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:27.598 rmmod nvme_tcp 00:35:27.598 rmmod nvme_fabrics 00:35:27.598 rmmod nvme_keyring 00:35:27.598 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:27.598 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:35:27.598 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:35:27.598 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@515 -- # '[' -n 288636 ']' 00:35:27.598 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@516 -- # killprocess 288636 00:35:27.598 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 288636 ']' 00:35:27.598 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 288636 00:35:27.598 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:35:27.598 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:27.598 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 288636 00:35:27.598 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:27.598 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:27.598 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 288636' 00:35:27.598 killing process with pid 288636 00:35:27.598 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 288636 00:35:27.598 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 288636 00:35:27.858 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:27.858 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:27.858 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:27.858 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:35:27.858 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:27.858 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-save 00:35:27.858 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-restore 00:35:27.858 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:27.859 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:27.859 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:27.859 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:27.859 22:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:29.769 22:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:29.769 00:35:29.769 real 0m13.863s 00:35:29.769 user 0m17.026s 00:35:29.769 sys 0m6.456s 00:35:29.769 22:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:29.769 22:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:35:29.769 ************************************ 00:35:29.769 END TEST nvmf_multicontroller 00:35:29.769 ************************************ 00:35:30.030 22:34:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:35:30.030 22:34:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:30.030 22:34:25 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:35:30.030 22:34:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.030 ************************************ 00:35:30.030 START TEST nvmf_aer 00:35:30.030 ************************************ 00:35:30.030 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:35:30.030 * Looking for test storage... 00:35:30.030 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:30.030 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:30.030 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lcov --version 00:35:30.030 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:30.030 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:30.030 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:30.030 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:30.030 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:30.030 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:35:30.030 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:35:30.030 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:35:30.030 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:35:30.030 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:35:30.030 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:35:30.030 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:35:30.030 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:30.030 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:35:30.030 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:35:30.030 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:30.030 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:30.030 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:35:30.030 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:35:30.030 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:30.030 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:35:30.030 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:35:30.030 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:35:30.030 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:35:30.030 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:30.030 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:35:30.030 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:35:30.030 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:30.030 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:30.030 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:35:30.030 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:30.030 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:30.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:30.030 --rc genhtml_branch_coverage=1 00:35:30.030 --rc genhtml_function_coverage=1 00:35:30.030 --rc genhtml_legend=1 00:35:30.030 --rc geninfo_all_blocks=1 00:35:30.030 --rc geninfo_unexecuted_blocks=1 00:35:30.030 00:35:30.030 ' 00:35:30.030 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:30.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:30.030 --rc genhtml_branch_coverage=1 00:35:30.030 --rc genhtml_function_coverage=1 00:35:30.030 --rc genhtml_legend=1 00:35:30.030 --rc geninfo_all_blocks=1 00:35:30.030 --rc geninfo_unexecuted_blocks=1 00:35:30.030 00:35:30.030 ' 00:35:30.030 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:30.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:30.030 --rc genhtml_branch_coverage=1 00:35:30.031 --rc genhtml_function_coverage=1 00:35:30.031 --rc genhtml_legend=1 00:35:30.031 --rc geninfo_all_blocks=1 00:35:30.031 --rc geninfo_unexecuted_blocks=1 00:35:30.031 00:35:30.031 ' 00:35:30.031 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:30.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:30.031 --rc genhtml_branch_coverage=1 00:35:30.031 --rc genhtml_function_coverage=1 00:35:30.031 --rc genhtml_legend=1 00:35:30.031 --rc geninfo_all_blocks=1 00:35:30.031 --rc geninfo_unexecuted_blocks=1 00:35:30.031 00:35:30.031 ' 00:35:30.031 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:30.031 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:35:30.292 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:30.292 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:30.292 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:35:30.292 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:30.292 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:30.292 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:30.292 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:30.292 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:30.292 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:30.292 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:30.292 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:35:30.292 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:35:30.292 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:30.292 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:30.292 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:30.292 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:30.292 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:30.292 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:35:30.292 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:30.292 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:30.292 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:30.292 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.292 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.292 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.292 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:35:30.292 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.292 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:35:30.292 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:30.292 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:30.292 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:30.292 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:30.292 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:30.292 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:30.292 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:30.292 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:30.292 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:30.292 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:30.292 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:35:30.292 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:30.292 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:30.292 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:30.292 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:30.292 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:30.292 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:30.292 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:30.292 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:30.292 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:30.292 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:35:30.292 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:35:30.293 22:34:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:38.433 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:38.433 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:38.433 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:38.433 22:34:32 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:38.433 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # is_hw=yes 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:38.433 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:38.434 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:38.434 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:38.434 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:38.434 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:38.434 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:38.434 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:38.434 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:38.434 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:38.434 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:38.434 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:38.434 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:38.434 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:38.434 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:38.434 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:38.434 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:38.434 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:38.434 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:38.434 
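
(For reference: the nvmf_tcp_init sequence traced above condenses to roughly the shell sketch below. This is a recap, not the script itself; the interface names cvl_0_0/cvl_0_1, the namespace name cvl_0_0_ns_spdk, and the 10.0.0.x addresses are simply the values this particular run resolved and will differ on other nodes.)

# target-side port lives in its own network namespace; the initiator side stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
# bring up both links plus the namespace loopback, then open the NVMe/TCP port
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

(The SPDK_NVMF comment tag is what lets the teardown path seen earlier restore the firewall via iptables-save | grep -v SPDK_NVMF | iptables-restore; the pings that follow verify that the two sides reach each other before the target is started.)
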
22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:35:38.434 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:35:38.434 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.577 ms
00:35:38.434
00:35:38.434 --- 10.0.0.2 ping statistics ---
00:35:38.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:38.434 rtt min/avg/max/mdev = 0.577/0.577/0.577/0.000 ms
00:35:38.434 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:35:38.434 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:35:38.434 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms
00:35:38.434
00:35:38.434 --- 10.0.0.1 ping statistics ---
00:35:38.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:38.434 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms
00:35:38.434 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:38.434 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # return 0 00:35:38.434 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:35:38.434 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:38.434 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:38.434 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:38.434 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:38.434 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:38.434 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:38.434 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:35:38.434 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:38.434 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:38.434 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:35:38.434 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # nvmfpid=293621 00:35:38.434 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # waitforlisten 293621 00:35:38.434 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:35:38.434 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 293621 ']' 00:35:38.434 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:38.434 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:38.434 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:38.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:38.434 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:38.434 22:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:35:38.434 [2024-10-01 22:34:32.726610] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:35:38.434 [2024-10-01 22:34:32.726686] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:38.434 [2024-10-01 22:34:32.799388] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:38.434 [2024-10-01 22:34:32.870486] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:38.434 [2024-10-01 22:34:32.870526] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:38.434 [2024-10-01 22:34:32.870534] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:38.434 [2024-10-01 22:34:32.870541] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:38.434 [2024-10-01 22:34:32.870547] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:38.434 [2024-10-01 22:34:32.870690] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:38.434 [2024-10-01 22:34:32.870932] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:35:38.434 [2024-10-01 22:34:32.871089] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:35:38.434 [2024-10-01 22:34:32.871089] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:35:38.434 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:38.434 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:35:38.434 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:38.434 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:38.434 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:35:38.434 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:38.434 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:38.434 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.434 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:35:38.434 [2024-10-01 22:34:33.584656] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:38.434 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.434 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:35:38.434 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.434 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:35:38.434 Malloc0 00:35:38.434 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.434 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:35:38.434 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.434 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:35:38.434 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:35:38.434 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:38.434 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.434 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:35:38.434 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.434 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:38.434 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.434 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:35:38.434 [2024-10-01 22:34:33.643847] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:38.434 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.434 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:35:38.434 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.434 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:35:38.434 [ 00:35:38.434 { 00:35:38.434 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:38.434 "subtype": "Discovery", 00:35:38.434 "listen_addresses": [], 00:35:38.434 "allow_any_host": true, 00:35:38.434 "hosts": [] 00:35:38.434 }, 00:35:38.434 { 00:35:38.434 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:38.434 "subtype": "NVMe", 00:35:38.434 "listen_addresses": [ 00:35:38.434 { 00:35:38.434 "trtype": "TCP", 00:35:38.434 "adrfam": "IPv4", 00:35:38.434 "traddr": "10.0.0.2", 00:35:38.434 "trsvcid": "4420" 00:35:38.434 } 00:35:38.434 ], 00:35:38.434 "allow_any_host": true, 00:35:38.434 "hosts": [], 00:35:38.434 "serial_number": "SPDK00000000000001", 00:35:38.434 "model_number": "SPDK bdev Controller", 00:35:38.434 "max_namespaces": 2, 00:35:38.434 "min_cntlid": 1, 00:35:38.434 "max_cntlid": 65519, 00:35:38.434 "namespaces": [ 00:35:38.434 { 00:35:38.434 "nsid": 1, 00:35:38.434 "bdev_name": "Malloc0", 00:35:38.434 "name": "Malloc0", 00:35:38.434 "nguid": "84907B362433468D8A791D86068DFF6C", 00:35:38.434 "uuid": "84907b36-2433-468d-8a79-1d86068dff6c" 00:35:38.434 } 00:35:38.434 ] 00:35:38.434 } 00:35:38.434 ] 00:35:38.434 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.434 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:35:38.434 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:35:38.434 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=293706 00:35:38.434 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:35:38.435 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:35:38.435 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:35:38.435 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:35:38.435 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:35:38.435 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:35:38.435 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:35:38.695 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:35:38.695 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:35:38.695 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:35:38.695 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:35:38.695 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:35:38.695 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:35:38.695 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:35:38.695 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:35:38.695 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.695 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:35:38.695 Malloc1 00:35:38.695 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.695 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:35:38.695 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.695 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:35:38.695 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.695 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:35:38.695 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.695 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:35:38.695 Asynchronous Event Request test 00:35:38.695 Attaching to 10.0.0.2 00:35:38.695 Attached to 10.0.0.2 00:35:38.695 Registering asynchronous event callbacks... 00:35:38.695 Starting namespace attribute notice tests for all controllers... 00:35:38.695 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:35:38.695 aer_cb - Changed Namespace 00:35:38.695 Cleaning up... 
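
(The namespace-change test above is driven by a short RPC sequence; a condensed sketch follows, with rpc.py standing in for the rpc_cmd wrapper used by the script and every command and value taken from the trace: the TCP transport options, the Malloc0/Malloc1 bdevs, and the cnode1 subsystem capped at two namespaces.)

# target side: transport, subsystem with one namespace, TCP listener
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 --name Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# host side: the aer helper connects, arms an AER, and touches the file when the notice arrives
test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
# attaching a second namespace is what fires the namespace-attribute-changed AEN
rpc.py bdev_malloc_create 64 4096 --name Malloc1
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2

(The nvmf_get_subsystems listing that follows confirms the result: Malloc1 appears as nsid 2 alongside Malloc0.)
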
00:35:38.695 [ 00:35:38.695 { 00:35:38.695 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:38.695 "subtype": "Discovery", 00:35:38.695 "listen_addresses": [], 00:35:38.695 "allow_any_host": true, 00:35:38.695 "hosts": [] 00:35:38.695 }, 00:35:38.695 { 00:35:38.695 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:38.695 "subtype": "NVMe", 00:35:38.695 "listen_addresses": [ 00:35:38.695 { 00:35:38.695 "trtype": "TCP", 00:35:38.695 "adrfam": "IPv4", 00:35:38.695 "traddr": "10.0.0.2", 00:35:38.695 "trsvcid": "4420" 00:35:38.695 } 00:35:38.695 ], 00:35:38.695 "allow_any_host": true, 00:35:38.695 "hosts": [], 00:35:38.695 "serial_number": "SPDK00000000000001", 00:35:38.695 "model_number": "SPDK bdev Controller", 00:35:38.695 "max_namespaces": 2, 00:35:38.695 "min_cntlid": 1, 00:35:38.695 "max_cntlid": 65519, 00:35:38.695 "namespaces": [ 00:35:38.695 { 00:35:38.695 "nsid": 1, 00:35:38.695 "bdev_name": "Malloc0", 00:35:38.695 "name": "Malloc0", 00:35:38.695 "nguid": "84907B362433468D8A791D86068DFF6C", 00:35:38.695 "uuid": "84907b36-2433-468d-8a79-1d86068dff6c" 00:35:38.695 }, 00:35:38.695 { 00:35:38.695 "nsid": 2, 00:35:38.695 "bdev_name": "Malloc1", 00:35:38.695 "name": "Malloc1", 00:35:38.695 "nguid": "DCB5A379E29D4161A90E8DEE86B4AE7F", 00:35:38.695 "uuid": "dcb5a379-e29d-4161-a90e-8dee86b4ae7f" 00:35:38.695 } 00:35:38.695 ] 00:35:38.695 } 00:35:38.695 ] 00:35:38.955 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.955 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 293706 00:35:38.955 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:35:38.955 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.955 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:35:38.955 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.955 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:35:38.955 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.955 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:35:38.955 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.955 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:38.955 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.955 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:35:38.955 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.955 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:35:38.955 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:35:38.955 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:38.955 22:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:35:38.955 22:34:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:38.955 22:34:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:35:38.955 22:34:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:38.955 22:34:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:38.955 rmmod 
nvme_tcp 00:35:38.955 rmmod nvme_fabrics 00:35:38.955 rmmod nvme_keyring 00:35:38.955 22:34:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:38.955 22:34:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:35:38.955 22:34:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:35:38.955 22:34:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@515 -- # '[' -n 293621 ']' 00:35:38.955 22:34:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # killprocess 293621 00:35:38.955 22:34:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 293621 ']' 00:35:38.955 22:34:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 293621 00:35:38.955 22:34:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:35:38.955 22:34:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:38.955 22:34:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 293621 00:35:38.955 22:34:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:38.955 22:34:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:38.955 22:34:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 293621' 00:35:38.955 killing process with pid 293621 00:35:38.955 22:34:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 293621 00:35:38.955 22:34:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 293621 00:35:39.216 22:34:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:39.216 22:34:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:39.216 22:34:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:39.216 22:34:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:35:39.216 22:34:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-save 00:35:39.216 22:34:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:39.216 22:34:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-restore 00:35:39.216 22:34:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:39.216 22:34:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:39.216 22:34:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:39.216 22:34:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:39.217 22:34:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:41.772 00:35:41.772 real 0m11.341s 00:35:41.772 user 0m7.938s 00:35:41.772 sys 0m6.024s 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:35:41.772 ************************************ 00:35:41.772 END TEST nvmf_aer 00:35:41.772 ************************************ 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.772 ************************************ 00:35:41.772 START TEST nvmf_async_init 00:35:41.772 ************************************ 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:35:41.772 * Looking for test storage... 00:35:41.772 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lcov --version 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:41.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:41.772 --rc genhtml_branch_coverage=1 00:35:41.772 --rc genhtml_function_coverage=1 00:35:41.772 --rc genhtml_legend=1 00:35:41.772 --rc geninfo_all_blocks=1 00:35:41.772 --rc geninfo_unexecuted_blocks=1 00:35:41.772 00:35:41.772 ' 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:41.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:41.772 --rc genhtml_branch_coverage=1 00:35:41.772 --rc genhtml_function_coverage=1 00:35:41.772 --rc genhtml_legend=1 00:35:41.772 --rc geninfo_all_blocks=1 00:35:41.772 --rc geninfo_unexecuted_blocks=1 00:35:41.772 00:35:41.772 ' 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:41.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:41.772 --rc genhtml_branch_coverage=1 00:35:41.772 --rc genhtml_function_coverage=1 00:35:41.772 --rc genhtml_legend=1 00:35:41.772 --rc geninfo_all_blocks=1 00:35:41.772 --rc geninfo_unexecuted_blocks=1 00:35:41.772 00:35:41.772 ' 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:41.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:41.772 --rc genhtml_branch_coverage=1 00:35:41.772 --rc genhtml_function_coverage=1 00:35:41.772 --rc genhtml_legend=1 00:35:41.772 --rc geninfo_all_blocks=1 00:35:41.772 --rc geninfo_unexecuted_blocks=1 00:35:41.772 00:35:41.772 ' 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:41.772 22:34:36 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:41.772 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:41.773 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:41.773 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:35:41.773 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:41.773 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:41.773 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:41.773 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:41.773 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:41.773 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:41.773 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:35:41.773 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:41.773 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:35:41.773 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:41.773 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:41.773 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:41.773 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:41.773 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:41.773 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:41.773 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:41.773 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:41.773 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:41.773 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:41.773 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:35:41.773 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:35:41.773 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:35:41.773 22:34:36 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:35:41.773 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:35:41.773 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:35:41.773 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=b54841c860ab43949ec72f64ee23cf06 00:35:41.773 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:35:41.773 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:41.773 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:41.773 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:41.773 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:41.773 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:41.773 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:41.773 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:41.773 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:41.773 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:41.773 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:41.773 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:35:41.773 22:34:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:35:49.914 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:49.914 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:35:49.914 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:49.914 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:49.914 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:49.914 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:49.914 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:49.915 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:49.915 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:49.915 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:49.915 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # is_hw=yes 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:49.915 22:34:43 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:49.915 22:34:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:49.915 22:34:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:49.915 22:34:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:49.915 22:34:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:49.915 22:34:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:49.915 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:49.915 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.558 ms 00:35:49.915 00:35:49.915 --- 10.0.0.2 ping statistics --- 00:35:49.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:49.915 rtt min/avg/max/mdev = 0.558/0.558/0.558/0.000 ms 00:35:49.915 22:34:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:49.915 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:49.915 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:35:49.915 00:35:49.915 --- 10.0.0.1 ping statistics --- 00:35:49.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:49.915 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:35:49.915 22:34:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:49.915 22:34:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # return 0 00:35:49.915 22:34:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:35:49.915 22:34:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:49.915 22:34:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:49.915 22:34:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:49.915 22:34:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:49.915 22:34:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:49.916 22:34:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:49.916 22:34:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:35:49.916 22:34:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:49.916 22:34:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:49.916 22:34:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:35:49.916 22:34:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # nvmfpid=298033 00:35:49.916 22:34:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # waitforlisten 298033 00:35:49.916 22:34:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:35:49.916 22:34:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 298033 ']' 00:35:49.916 22:34:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:49.916 22:34:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:49.916 22:34:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:49.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:49.916 22:34:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:49.916 22:34:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:35:49.916 [2024-10-01 22:34:44.236215] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:35:49.916 [2024-10-01 22:34:44.236278] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:49.916 [2024-10-01 22:34:44.307915] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:49.916 [2024-10-01 22:34:44.381594] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:49.916 [2024-10-01 22:34:44.381637] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:49.916 [2024-10-01 22:34:44.381648] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:49.916 [2024-10-01 22:34:44.381655] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:49.916 [2024-10-01 22:34:44.381661] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:49.916 [2024-10-01 22:34:44.381684] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:35:49.916 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:49.916 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:35:49.916 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:49.916 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:49.916 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:35:49.916 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:49.916 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:35:49.916 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.916 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:35:49.916 [2024-10-01 22:34:45.079912] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:49.916 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.916 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:35:49.916 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.916 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:35:49.916 null0 00:35:49.916 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.916 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:35:49.916 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.916 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:35:49.916 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.916 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:35:49.916 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:35:49.916 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:35:49.916 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.916 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g b54841c860ab43949ec72f64ee23cf06 00:35:49.916 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.916 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:35:49.916 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.916 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:49.916 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.916 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:35:49.916 [2024-10-01 22:34:45.120133] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:49.916 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.916 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:35:49.916 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.916 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:35:50.176 nvme0n1 00:35:50.176 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.176 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:35:50.176 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.176 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:35:50.176 [ 00:35:50.176 { 00:35:50.176 "name": "nvme0n1", 00:35:50.176 "aliases": [ 00:35:50.176 "b54841c8-60ab-4394-9ec7-2f64ee23cf06" 00:35:50.176 ], 00:35:50.176 "product_name": "NVMe disk", 00:35:50.176 "block_size": 512, 00:35:50.176 "num_blocks": 2097152, 00:35:50.176 "uuid": "b54841c8-60ab-4394-9ec7-2f64ee23cf06", 00:35:50.176 "numa_id": 0, 00:35:50.176 "assigned_rate_limits": { 00:35:50.176 "rw_ios_per_sec": 0, 00:35:50.176 "rw_mbytes_per_sec": 0, 00:35:50.176 "r_mbytes_per_sec": 0, 00:35:50.176 "w_mbytes_per_sec": 0 00:35:50.176 }, 00:35:50.176 "claimed": false, 00:35:50.176 "zoned": false, 00:35:50.176 "supported_io_types": { 00:35:50.176 "read": true, 00:35:50.176 "write": true, 00:35:50.176 "unmap": false, 00:35:50.176 "flush": true, 00:35:50.176 "reset": true, 00:35:50.176 "nvme_admin": true, 00:35:50.176 "nvme_io": true, 00:35:50.176 "nvme_io_md": false, 00:35:50.176 "write_zeroes": true, 00:35:50.176 "zcopy": false, 00:35:50.176 "get_zone_info": false, 00:35:50.176 "zone_management": false, 00:35:50.176 "zone_append": false, 00:35:50.176 "compare": true, 00:35:50.176 "compare_and_write": true, 00:35:50.176 "abort": true, 00:35:50.176 "seek_hole": false, 00:35:50.176 "seek_data": false, 00:35:50.176 "copy": true, 00:35:50.176 "nvme_iov_md": false 00:35:50.176 }, 00:35:50.176 
"memory_domains": [ 00:35:50.176 { 00:35:50.176 "dma_device_id": "system", 00:35:50.176 "dma_device_type": 1 00:35:50.176 } 00:35:50.176 ], 00:35:50.176 "driver_specific": { 00:35:50.176 "nvme": [ 00:35:50.176 { 00:35:50.176 "trid": { 00:35:50.176 "trtype": "TCP", 00:35:50.176 "adrfam": "IPv4", 00:35:50.176 "traddr": "10.0.0.2", 00:35:50.176 "trsvcid": "4420", 00:35:50.176 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:35:50.176 }, 00:35:50.176 "ctrlr_data": { 00:35:50.176 "cntlid": 1, 00:35:50.176 "vendor_id": "0x8086", 00:35:50.176 "model_number": "SPDK bdev Controller", 00:35:50.176 "serial_number": "00000000000000000000", 00:35:50.176 "firmware_revision": "25.01", 00:35:50.176 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:50.176 "oacs": { 00:35:50.176 "security": 0, 00:35:50.176 "format": 0, 00:35:50.176 "firmware": 0, 00:35:50.176 "ns_manage": 0 00:35:50.176 }, 00:35:50.176 "multi_ctrlr": true, 00:35:50.176 "ana_reporting": false 00:35:50.176 }, 00:35:50.176 "vs": { 00:35:50.176 "nvme_version": "1.3" 00:35:50.176 }, 00:35:50.176 "ns_data": { 00:35:50.176 "id": 1, 00:35:50.176 "can_share": true 00:35:50.176 } 00:35:50.176 } 00:35:50.176 ], 00:35:50.176 "mp_policy": "active_passive" 00:35:50.176 } 00:35:50.176 } 00:35:50.176 ] 00:35:50.176 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.176 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:35:50.176 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.176 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:35:50.177 [2024-10-01 22:34:45.376649] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:50.177 [2024-10-01 22:34:45.376710] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2188120 (9): Bad file descriptor 00:35:50.437 [2024-10-01 22:34:45.508727] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:35:50.437 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.437 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:35:50.437 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.437 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:35:50.437 [ 00:35:50.437 { 00:35:50.437 "name": "nvme0n1", 00:35:50.437 "aliases": [ 00:35:50.437 "b54841c8-60ab-4394-9ec7-2f64ee23cf06" 00:35:50.437 ], 00:35:50.437 "product_name": "NVMe disk", 00:35:50.437 "block_size": 512, 00:35:50.437 "num_blocks": 2097152, 00:35:50.437 "uuid": "b54841c8-60ab-4394-9ec7-2f64ee23cf06", 00:35:50.437 "numa_id": 0, 00:35:50.437 "assigned_rate_limits": { 00:35:50.437 "rw_ios_per_sec": 0, 00:35:50.437 "rw_mbytes_per_sec": 0, 00:35:50.437 "r_mbytes_per_sec": 0, 00:35:50.437 "w_mbytes_per_sec": 0 00:35:50.437 }, 00:35:50.437 "claimed": false, 00:35:50.437 "zoned": false, 00:35:50.437 "supported_io_types": { 00:35:50.437 "read": true, 00:35:50.437 "write": true, 00:35:50.437 "unmap": false, 00:35:50.437 "flush": true, 00:35:50.437 "reset": true, 00:35:50.437 "nvme_admin": true, 00:35:50.437 "nvme_io": true, 00:35:50.437 "nvme_io_md": false, 00:35:50.437 "write_zeroes": true, 00:35:50.437 "zcopy": false, 00:35:50.437 "get_zone_info": false, 00:35:50.437 "zone_management": false, 00:35:50.437 "zone_append": false, 00:35:50.437 "compare": true, 00:35:50.437 "compare_and_write": true, 00:35:50.437 "abort": true, 00:35:50.437 "seek_hole": false, 00:35:50.437 "seek_data": false, 00:35:50.437 "copy": true, 00:35:50.437 "nvme_iov_md": false 00:35:50.437 }, 00:35:50.437 "memory_domains": [ 00:35:50.437 { 00:35:50.437 "dma_device_id": "system", 00:35:50.437 "dma_device_type": 1 00:35:50.437 } 00:35:50.437 ], 00:35:50.437 "driver_specific": { 00:35:50.437 "nvme": [ 00:35:50.437 { 00:35:50.437 "trid": { 00:35:50.437 "trtype": "TCP", 00:35:50.437 "adrfam": "IPv4", 00:35:50.437 "traddr": "10.0.0.2", 00:35:50.437 "trsvcid": "4420", 00:35:50.437 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:35:50.437 }, 00:35:50.437 "ctrlr_data": { 00:35:50.437 "cntlid": 2, 00:35:50.437 "vendor_id": "0x8086", 00:35:50.437 "model_number": "SPDK bdev Controller", 00:35:50.438 "serial_number": "00000000000000000000", 00:35:50.438 "firmware_revision": "25.01", 00:35:50.438 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:50.438 "oacs": { 00:35:50.438 "security": 0, 00:35:50.438 "format": 0, 00:35:50.438 "firmware": 0, 00:35:50.438 "ns_manage": 0 00:35:50.438 }, 00:35:50.438 "multi_ctrlr": true, 00:35:50.438 "ana_reporting": false 00:35:50.438 }, 00:35:50.438 "vs": { 00:35:50.438 "nvme_version": "1.3" 00:35:50.438 }, 00:35:50.438 "ns_data": { 00:35:50.438 "id": 1, 00:35:50.438 "can_share": true 00:35:50.438 } 00:35:50.438 } 00:35:50.438 ], 00:35:50.438 "mp_policy": "active_passive" 00:35:50.438 } 00:35:50.438 } 00:35:50.438 ] 00:35:50.438 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.438 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:50.438 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.438 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:35:50.438 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
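Note the ctrlr_data in the dump above: cntlid is now 2 (it was 1 before the reset), confirming the reset created a fresh controller association rather than reusing the old one. A quick way to pull that field out of the bdev_get_bdevs output, assuming jq is available; the field path matches the JSON printed above:

    scripts/rpc.py bdev_get_bdevs -b nvme0n1 \
        | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'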
00:35:50.438 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:35:50.438 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.zNB05HoVHw 00:35:50.438 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:35:50.438 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.zNB05HoVHw 00:35:50.438 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.zNB05HoVHw 00:35:50.438 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.438 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:35:50.438 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.438 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:35:50.438 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.438 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:35:50.438 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.438 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:35:50.438 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.438 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:35:50.438 [2024-10-01 22:34:45.577271] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:50.438 [2024-10-01 22:34:45.577379] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:35:50.438 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.438 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:35:50.438 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.438 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:35:50.438 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.438 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:35:50.438 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.438 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:35:50.438 [2024-10-01 22:34:45.593334] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:50.438 nvme0n1 00:35:50.438 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.438 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:35:50.438 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.438 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:35:50.438 [ 00:35:50.438 { 00:35:50.438 "name": "nvme0n1", 00:35:50.438 "aliases": [ 00:35:50.438 "b54841c8-60ab-4394-9ec7-2f64ee23cf06" 00:35:50.438 ], 00:35:50.438 "product_name": "NVMe disk", 00:35:50.438 "block_size": 512, 00:35:50.438 "num_blocks": 2097152, 00:35:50.438 "uuid": "b54841c8-60ab-4394-9ec7-2f64ee23cf06", 00:35:50.438 "numa_id": 0, 00:35:50.438 "assigned_rate_limits": { 00:35:50.438 "rw_ios_per_sec": 0, 00:35:50.438 "rw_mbytes_per_sec": 0, 00:35:50.438 "r_mbytes_per_sec": 0, 00:35:50.438 "w_mbytes_per_sec": 0 00:35:50.438 }, 00:35:50.438 "claimed": false, 00:35:50.438 "zoned": false, 00:35:50.438 "supported_io_types": { 00:35:50.438 "read": true, 00:35:50.438 "write": true, 00:35:50.438 "unmap": false, 00:35:50.438 "flush": true, 00:35:50.438 "reset": true, 00:35:50.438 "nvme_admin": true, 00:35:50.438 "nvme_io": true, 00:35:50.438 "nvme_io_md": false, 00:35:50.438 "write_zeroes": true, 00:35:50.438 "zcopy": false, 00:35:50.438 "get_zone_info": false, 00:35:50.438 "zone_management": false, 00:35:50.438 "zone_append": false, 00:35:50.438 "compare": true, 00:35:50.438 "compare_and_write": true, 00:35:50.438 "abort": true, 00:35:50.438 "seek_hole": false, 00:35:50.438 "seek_data": false, 00:35:50.438 "copy": true, 00:35:50.438 "nvme_iov_md": false 00:35:50.438 }, 00:35:50.438 "memory_domains": [ 00:35:50.438 { 00:35:50.438 "dma_device_id": "system", 00:35:50.438 "dma_device_type": 1 00:35:50.438 } 00:35:50.438 ], 00:35:50.438 "driver_specific": { 00:35:50.438 "nvme": [ 00:35:50.438 { 00:35:50.438 "trid": { 00:35:50.438 "trtype": "TCP", 00:35:50.438 "adrfam": "IPv4", 00:35:50.438 "traddr": "10.0.0.2", 00:35:50.438 "trsvcid": "4421", 00:35:50.438 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:35:50.438 }, 00:35:50.438 "ctrlr_data": { 00:35:50.438 "cntlid": 3, 00:35:50.438 "vendor_id": "0x8086", 00:35:50.438 "model_number": "SPDK bdev Controller", 00:35:50.438 "serial_number": "00000000000000000000", 00:35:50.438 "firmware_revision": "25.01", 00:35:50.438 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:50.438 "oacs": { 00:35:50.438 "security": 0, 00:35:50.438 "format": 0, 00:35:50.438 "firmware": 0, 00:35:50.438 "ns_manage": 0 00:35:50.438 }, 00:35:50.438 "multi_ctrlr": true, 00:35:50.438 "ana_reporting": false 00:35:50.438 }, 00:35:50.438 "vs": { 00:35:50.438 "nvme_version": "1.3" 00:35:50.438 }, 00:35:50.438 "ns_data": { 00:35:50.438 "id": 1, 00:35:50.438 "can_share": true 00:35:50.438 } 00:35:50.438 } 00:35:50.438 ], 00:35:50.438 "mp_policy": "active_passive" 00:35:50.438 } 00:35:50.438 } 00:35:50.438 ] 00:35:50.438 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.438 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:50.438 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.438 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:35:50.698 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.699 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.zNB05HoVHw 00:35:50.699 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
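The dump above closes the TLS leg of the test: trsvcid is 4421 (the secure-channel listener) and cntlid advanced to 3 for the new association. Condensed, the sequence the script ran is sketched below; each command is taken from the trace above (the scripts/rpc.py invocation path is assumed), and the PSK is the sample interchange-format key visible in the trace, not a secret. The keyring requires 0600 permissions on the key file:

    # write the sample PSK to a file the keyring will accept
    KEY=$(mktemp)
    echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY"
    chmod 0600 "$KEY"
    scripts/rpc.py keyring_file_add_key key0 "$KEY"
    # restrict the subsystem, open a TLS listener, and authorize the host with the PSK
    scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
        nqn.2016-06.io.spdk:host1 --psk key0
    # attach over the secure listener using the same key
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 \
        -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0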
00:35:50.699 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:35:50.699 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:50.699 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:35:50.699 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:50.699 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:35:50.699 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:50.699 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:50.699 rmmod nvme_tcp 00:35:50.699 rmmod nvme_fabrics 00:35:50.699 rmmod nvme_keyring 00:35:50.699 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:50.699 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:35:50.699 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:35:50.699 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@515 -- # '[' -n 298033 ']' 00:35:50.699 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # killprocess 298033 00:35:50.699 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 298033 ']' 00:35:50.699 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 298033 00:35:50.699 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:35:50.699 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:50.699 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 298033 00:35:50.699 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:50.699 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:50.699 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 298033' 00:35:50.699 killing process with pid 298033 00:35:50.699 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 298033 00:35:50.699 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 298033 00:35:50.958 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:50.958 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:50.958 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:50.958 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:35:50.958 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-save 00:35:50.958 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:50.958 22:34:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-restore 00:35:50.959 22:34:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:50.959 22:34:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:50.959 22:34:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:50.959 
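The firewall teardown above works because every rule the test installed carried an SPDK_NVMF comment (see the ipts wrapper earlier, which appended -m comment --comment 'SPDK_NVMF:...'), so cleanup can rewrite the ruleset without those rules instead of tracking them individually:

    # strip every rule tagged by the test, leave everything else untouched
    iptables-save | grep -v SPDK_NVMF | iptables-restore

The namespace removal (_remove_spdk_ns) happens separately; a minimal equivalent, assuming the namespace name from this run, would be:

    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true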
22:34:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:50.959 22:34:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:52.868 22:34:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:52.868 00:35:52.868 real 0m11.580s 00:35:52.868 user 0m4.027s 00:35:52.868 sys 0m5.993s 00:35:52.868 22:34:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:52.868 22:34:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:35:52.868 ************************************ 00:35:52.868 END TEST nvmf_async_init 00:35:52.868 ************************************ 00:35:52.868 22:34:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:35:52.868 22:34:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:52.868 22:34:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:52.868 22:34:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.129 ************************************ 00:35:53.129 START TEST dma 00:35:53.129 ************************************ 00:35:53.129 22:34:48 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:35:53.129 * Looking for test storage... 00:35:53.129 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:53.129 22:34:48 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:53.129 22:34:48 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lcov --version 00:35:53.129 22:34:48 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:53.129 22:34:48 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:53.129 22:34:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:53.129 22:34:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:53.129 22:34:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:53.129 22:34:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:35:53.129 22:34:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:35:53.129 22:34:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:35:53.129 22:34:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:35:53.129 22:34:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:35:53.129 22:34:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:35:53.129 22:34:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:35:53.129 22:34:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:53.129 22:34:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:35:53.129 22:34:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:35:53.129 22:34:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:53.129 22:34:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:53.129 22:34:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:35:53.129 22:34:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:35:53.129 22:34:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:53.129 22:34:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:35:53.129 22:34:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:35:53.129 22:34:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:35:53.129 22:34:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:35:53.129 22:34:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:53.129 22:34:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:35:53.129 22:34:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:35:53.129 22:34:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:53.129 22:34:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:53.129 22:34:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:35:53.129 22:34:48 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:53.129 22:34:48 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:53.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.129 --rc genhtml_branch_coverage=1 00:35:53.129 --rc genhtml_function_coverage=1 00:35:53.129 --rc genhtml_legend=1 00:35:53.129 --rc geninfo_all_blocks=1 00:35:53.129 --rc geninfo_unexecuted_blocks=1 00:35:53.129 00:35:53.129 ' 00:35:53.129 22:34:48 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:53.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.129 --rc genhtml_branch_coverage=1 00:35:53.129 --rc genhtml_function_coverage=1 00:35:53.130 --rc genhtml_legend=1 00:35:53.130 --rc geninfo_all_blocks=1 00:35:53.130 --rc geninfo_unexecuted_blocks=1 00:35:53.130 00:35:53.130 ' 00:35:53.130 22:34:48 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:53.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.130 --rc genhtml_branch_coverage=1 00:35:53.130 --rc genhtml_function_coverage=1 00:35:53.130 --rc genhtml_legend=1 00:35:53.130 --rc geninfo_all_blocks=1 00:35:53.130 --rc geninfo_unexecuted_blocks=1 00:35:53.130 00:35:53.130 ' 00:35:53.130 22:34:48 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:53.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.130 --rc genhtml_branch_coverage=1 00:35:53.130 --rc genhtml_function_coverage=1 00:35:53.130 --rc genhtml_legend=1 00:35:53.130 --rc geninfo_all_blocks=1 00:35:53.130 --rc geninfo_unexecuted_blocks=1 00:35:53.130 00:35:53.130 ' 00:35:53.130 22:34:48 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:53.130 22:34:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:35:53.130 22:34:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:53.130 22:34:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:53.130 22:34:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:53.130 22:34:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:53.130 
22:34:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:53.130 22:34:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:53.130 22:34:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:53.130 22:34:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:53.130 22:34:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:53.130 22:34:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:53.130 22:34:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:35:53.130 22:34:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:35:53.130 22:34:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:53.130 22:34:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:53.130 22:34:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:53.130 22:34:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:53.130 22:34:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:53.130 22:34:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:35:53.130 22:34:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:53.130 22:34:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:53.130 22:34:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:53.130 22:34:48 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.130 22:34:48 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.130 22:34:48 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.130 22:34:48 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:35:53.130 22:34:48 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.130 22:34:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:35:53.130 22:34:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:53.130 22:34:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:53.130 22:34:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:53.130 22:34:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:53.130 22:34:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:53.130 22:34:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:53.130 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:53.130 22:34:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:53.130 22:34:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:53.130 22:34:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:53.130 22:34:48 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:35:53.130 22:34:48 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:35:53.130 00:35:53.130 real 0m0.220s 00:35:53.130 user 0m0.130s 00:35:53.130 sys 0m0.103s 00:35:53.130 22:34:48 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:53.130 22:34:48 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:35:53.130 ************************************ 00:35:53.130 END TEST dma 00:35:53.130 ************************************ 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.392 ************************************ 00:35:53.392 START TEST nvmf_identify 00:35:53.392 
************************************ 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:35:53.392 * Looking for test storage... 00:35:53.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:53.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.392 --rc genhtml_branch_coverage=1 00:35:53.392 --rc genhtml_function_coverage=1 00:35:53.392 --rc genhtml_legend=1 00:35:53.392 --rc geninfo_all_blocks=1 00:35:53.392 --rc geninfo_unexecuted_blocks=1 00:35:53.392 00:35:53.392 ' 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:53.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.392 --rc genhtml_branch_coverage=1 00:35:53.392 --rc genhtml_function_coverage=1 00:35:53.392 --rc genhtml_legend=1 00:35:53.392 --rc geninfo_all_blocks=1 00:35:53.392 --rc geninfo_unexecuted_blocks=1 00:35:53.392 00:35:53.392 ' 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:53.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.392 --rc genhtml_branch_coverage=1 00:35:53.392 --rc genhtml_function_coverage=1 00:35:53.392 --rc genhtml_legend=1 00:35:53.392 --rc geninfo_all_blocks=1 00:35:53.392 --rc geninfo_unexecuted_blocks=1 00:35:53.392 00:35:53.392 ' 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:53.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.392 --rc genhtml_branch_coverage=1 00:35:53.392 --rc genhtml_function_coverage=1 00:35:53.392 --rc genhtml_legend=1 00:35:53.392 --rc geninfo_all_blocks=1 00:35:53.392 --rc geninfo_unexecuted_blocks=1 00:35:53.392 00:35:53.392 ' 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:53.392 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:53.654 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:35:53.654 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:35:53.654 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:53.654 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:53.654 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:53.654 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:53.654 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:53.654 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:35:53.654 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:53.654 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:53.654 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:53.654 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.654 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.654 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.654 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:35:53.654 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.654 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:35:53.654 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:53.654 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:53.654 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:53.654 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:53.654 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:53.654 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:53.654 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:53.654 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:53.654 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:53.654 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:53.654 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:53.654 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:53.654 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:35:53.654 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:53.654 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:53.654 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:53.654 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:53.654 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:53.654 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:53.654 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:53.654 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:53.654 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:53.654 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:53.654 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:35:53.654 22:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:01.796 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:01.796 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
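[The trace above walks gather_supported_nvmf_pci_devs: it seeds lookup tables for Intel E810/X722 and Mellanox device IDs, then matches the two E810 functions on this host (0x8086:0x159b, bound to the ice driver) and collects the kernel net devices under each. Below is a minimal standalone sketch of that sysfs walk, for illustration only — the real logic lives in spdk/test/nvmf/common.sh and additionally handles driver binding, RDMA transports, and link state.]

#!/usr/bin/env bash
# Sketch: find Intel E810 NICs (vendor 0x8086, device 0x159b) and list the
# kernel net devices sitting under each matching PCI function in sysfs.
intel=0x8086
e810=0x159b
for pci in /sys/bus/pci/devices/*; do
    [[ $(<"$pci/vendor") == "$intel" && $(<"$pci/device") == "$e810" ]] || continue
    for net in "$pci"/net/*; do
        # Each entry under <pci>/net/ is a netdev name, e.g. cvl_0_0.
        [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
    done
done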
00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:01.796 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:01.796 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # is_hw=yes 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:01.796 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:01.796 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:36:01.796 00:36:01.796 --- 10.0.0.2 ping statistics --- 00:36:01.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:01.796 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:36:01.796 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:01.796 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:01.796 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:36:01.796 00:36:01.796 --- 10.0.0.1 ping statistics --- 00:36:01.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:01.796 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:36:01.797 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:01.797 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # return 0 00:36:01.797 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:36:01.797 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:01.797 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:36:01.797 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:36:01.797 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:01.797 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:36:01.797 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:36:01.797 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:36:01.797 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:01.797 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:01.797 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=302708 00:36:01.797 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:01.797 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:36:01.797 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 302708 00:36:01.797 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 302708 ']' 00:36:01.797 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:01.797 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:01.797 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:01.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:01.797 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:01.797 22:34:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:01.797 [2024-10-01 22:34:55.993711] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
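[At this point the harness launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace it just wired up, then blocks in waitforlisten until the app answers on its RPC socket. A hedged sketch of that start/wait handshake follows, assuming the default /var/tmp/spdk.sock RPC socket; the real waitforlisten in autotest_common.sh is more involved (bounded retries, cleanup and kill on timeout).]

#!/usr/bin/env bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Run the target in the network namespace created earlier in the trace,
# with the same shm id, tracepoint mask, and core mask the log shows.
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Poll the UNIX-domain RPC socket until the app is up (or the process dies).
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    kill -0 "$nvmfpid" 2>/dev/null || exit 1   # target exited before listening
    sleep 0.1
done
echo "nvmf_tgt ($nvmfpid) is listening on /var/tmp/spdk.sock"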
00:36:01.797 [2024-10-01 22:34:55.993778] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:01.797 [2024-10-01 22:34:56.067303] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:01.797 [2024-10-01 22:34:56.142683] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:01.797 [2024-10-01 22:34:56.142724] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:01.797 [2024-10-01 22:34:56.142732] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:01.797 [2024-10-01 22:34:56.142739] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:01.797 [2024-10-01 22:34:56.142745] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:01.797 [2024-10-01 22:34:56.142910] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:01.797 [2024-10-01 22:34:56.143036] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:36:01.797 [2024-10-01 22:34:56.143195] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:36:01.797 [2024-10-01 22:34:56.143196] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:36:01.797 22:34:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:01.797 22:34:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:36:01.797 22:34:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:01.797 22:34:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.797 22:34:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:01.797 [2024-10-01 22:34:56.812335] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:01.797 22:34:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.797 22:34:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:36:01.797 22:34:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:01.797 22:34:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:01.797 22:34:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:01.797 22:34:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.797 22:34:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:01.797 Malloc0 00:36:01.797 22:34:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.797 22:34:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:01.797 22:34:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.797 22:34:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:01.797 22:34:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.797 22:34:56 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:36:01.797 22:34:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.797 22:34:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:01.797 22:34:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.797 22:34:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:01.797 22:34:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.797 22:34:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:01.797 [2024-10-01 22:34:56.911801] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:01.797 22:34:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.797 22:34:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:01.797 22:34:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.797 22:34:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:01.797 22:34:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.797 22:34:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:36:01.797 22:34:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.797 22:34:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:01.797 [ 00:36:01.797 { 00:36:01.797 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:36:01.797 "subtype": "Discovery", 00:36:01.797 "listen_addresses": [ 00:36:01.797 { 00:36:01.797 "trtype": "TCP", 00:36:01.797 "adrfam": "IPv4", 00:36:01.797 "traddr": "10.0.0.2", 00:36:01.797 "trsvcid": "4420" 00:36:01.797 } 00:36:01.797 ], 00:36:01.797 "allow_any_host": true, 00:36:01.797 "hosts": [] 00:36:01.797 }, 00:36:01.797 { 00:36:01.797 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:01.797 "subtype": "NVMe", 00:36:01.797 "listen_addresses": [ 00:36:01.797 { 00:36:01.797 "trtype": "TCP", 00:36:01.797 "adrfam": "IPv4", 00:36:01.797 "traddr": "10.0.0.2", 00:36:01.797 "trsvcid": "4420" 00:36:01.797 } 00:36:01.797 ], 00:36:01.797 "allow_any_host": true, 00:36:01.797 "hosts": [], 00:36:01.797 "serial_number": "SPDK00000000000001", 00:36:01.797 "model_number": "SPDK bdev Controller", 00:36:01.797 "max_namespaces": 32, 00:36:01.797 "min_cntlid": 1, 00:36:01.797 "max_cntlid": 65519, 00:36:01.797 "namespaces": [ 00:36:01.797 { 00:36:01.797 "nsid": 1, 00:36:01.797 "bdev_name": "Malloc0", 00:36:01.797 "name": "Malloc0", 00:36:01.797 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:36:01.797 "eui64": "ABCDEF0123456789", 00:36:01.797 "uuid": "98e67710-0e57-46ad-a4d5-b097150ad62f" 00:36:01.797 } 00:36:01.797 ] 00:36:01.797 } 00:36:01.797 ] 00:36:01.797 22:34:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.797 22:34:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:36:01.797 [2024-10-01 22:34:56.974185] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:36:01.797 [2024-10-01 22:34:56.974229] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid302790 ] 00:36:01.797 [2024-10-01 22:34:57.004467] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:36:01.797 [2024-10-01 22:34:57.004520] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:36:01.797 [2024-10-01 22:34:57.004526] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:36:01.797 [2024-10-01 22:34:57.004538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:36:01.797 [2024-10-01 22:34:57.004546] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:36:01.797 [2024-10-01 22:34:57.011881] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:36:01.797 [2024-10-01 22:34:57.011917] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1151760 0 00:36:01.797 [2024-10-01 22:34:57.012154] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:36:01.797 [2024-10-01 22:34:57.012162] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:36:01.797 [2024-10-01 22:34:57.012167] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:36:01.797 [2024-10-01 22:34:57.012170] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:36:01.797 [2024-10-01 22:34:57.012193] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:01.797 [2024-10-01 22:34:57.012198] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:01.797 [2024-10-01 22:34:57.012202] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1151760) 00:36:01.797 [2024-10-01 22:34:57.012214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:36:01.797 [2024-10-01 22:34:57.012228] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11b1480, cid 0, qid 0 00:36:01.798 [2024-10-01 22:34:57.019636] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:01.798 [2024-10-01 22:34:57.019646] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:01.798 [2024-10-01 22:34:57.019650] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:01.798 [2024-10-01 22:34:57.019655] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11b1480) on tqpair=0x1151760 00:36:01.798 [2024-10-01 22:34:57.019666] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:36:01.798 [2024-10-01 22:34:57.019673] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:36:01.798 [2024-10-01 22:34:57.019678] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:36:01.798 [2024-10-01 22:34:57.019691] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:01.798 [2024-10-01 22:34:57.019695] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:01.798 [2024-10-01 22:34:57.019699] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1151760) 00:36:01.798 [2024-10-01 22:34:57.019707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.798 [2024-10-01 22:34:57.019720] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11b1480, cid 0, qid 0 00:36:01.798 [2024-10-01 22:34:57.019891] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:01.798 [2024-10-01 22:34:57.019897] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:01.798 [2024-10-01 22:34:57.019901] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:01.798 [2024-10-01 22:34:57.019905] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11b1480) on tqpair=0x1151760 00:36:01.798 [2024-10-01 22:34:57.019910] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:36:01.798 [2024-10-01 22:34:57.019917] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:36:01.798 [2024-10-01 22:34:57.019924] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:01.798 [2024-10-01 22:34:57.019928] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:01.798 [2024-10-01 22:34:57.019934] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1151760) 00:36:01.798 [2024-10-01 22:34:57.019942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.798 [2024-10-01 22:34:57.019952] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11b1480, cid 0, qid 0 00:36:01.798 [2024-10-01 22:34:57.020106] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:01.798 [2024-10-01 22:34:57.020112] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:01.798 [2024-10-01 22:34:57.020116] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:01.798 [2024-10-01 22:34:57.020120] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11b1480) on tqpair=0x1151760 00:36:01.798 [2024-10-01 22:34:57.020125] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:36:01.798 [2024-10-01 22:34:57.020133] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:36:01.798 [2024-10-01 22:34:57.020139] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:01.798 [2024-10-01 22:34:57.020143] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:01.798 [2024-10-01 22:34:57.020147] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1151760) 00:36:01.798 [2024-10-01 22:34:57.020153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.798 [2024-10-01 22:34:57.020164] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11b1480, cid 0, qid 0 00:36:01.798 
[2024-10-01 22:34:57.020323] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:01.798 [2024-10-01 22:34:57.020329] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:01.798 [2024-10-01 22:34:57.020333] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:01.798 [2024-10-01 22:34:57.020337] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11b1480) on tqpair=0x1151760 00:36:01.798 [2024-10-01 22:34:57.020342] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:36:01.798 [2024-10-01 22:34:57.020351] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:01.798 [2024-10-01 22:34:57.020355] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:01.798 [2024-10-01 22:34:57.020359] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1151760) 00:36:01.798 [2024-10-01 22:34:57.020365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.798 [2024-10-01 22:34:57.020375] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11b1480, cid 0, qid 0 00:36:01.798 [2024-10-01 22:34:57.020591] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:01.798 [2024-10-01 22:34:57.020598] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:01.798 [2024-10-01 22:34:57.020601] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:01.798 [2024-10-01 22:34:57.020605] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11b1480) on tqpair=0x1151760 00:36:01.798 [2024-10-01 22:34:57.020610] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:36:01.798 [2024-10-01 22:34:57.020615] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:36:01.798 [2024-10-01 22:34:57.020622] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:36:01.798 [2024-10-01 22:34:57.020732] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:36:01.798 [2024-10-01 22:34:57.020737] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:36:01.798 [2024-10-01 22:34:57.020747] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:01.798 [2024-10-01 22:34:57.020751] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:01.798 [2024-10-01 22:34:57.020755] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1151760) 00:36:01.798 [2024-10-01 22:34:57.020762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.798 [2024-10-01 22:34:57.020773] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11b1480, cid 0, qid 0 00:36:01.798 [2024-10-01 22:34:57.020954] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:01.798 [2024-10-01 22:34:57.020960] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =5 00:36:01.798 [2024-10-01 22:34:57.020964] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:01.798 [2024-10-01 22:34:57.020968] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11b1480) on tqpair=0x1151760 00:36:01.798 [2024-10-01 22:34:57.020972] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:36:01.798 [2024-10-01 22:34:57.020982] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:01.798 [2024-10-01 22:34:57.020986] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:01.798 [2024-10-01 22:34:57.020989] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1151760) 00:36:01.798 [2024-10-01 22:34:57.020996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.798 [2024-10-01 22:34:57.021006] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11b1480, cid 0, qid 0 00:36:01.798 [2024-10-01 22:34:57.021177] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:01.798 [2024-10-01 22:34:57.021184] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:01.798 [2024-10-01 22:34:57.021187] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:01.798 [2024-10-01 22:34:57.021191] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11b1480) on tqpair=0x1151760 00:36:01.798 [2024-10-01 22:34:57.021195] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:36:01.798 [2024-10-01 22:34:57.021200] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:36:01.798 [2024-10-01 22:34:57.021208] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:36:01.798 [2024-10-01 22:34:57.021216] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:36:01.798 [2024-10-01 22:34:57.021225] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:01.798 [2024-10-01 22:34:57.021229] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1151760) 00:36:01.798 [2024-10-01 22:34:57.021236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.798 [2024-10-01 22:34:57.021246] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11b1480, cid 0, qid 0 00:36:01.798 [2024-10-01 22:34:57.021443] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:01.798 [2024-10-01 22:34:57.021450] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:01.798 [2024-10-01 22:34:57.021454] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:01.798 [2024-10-01 22:34:57.021458] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1151760): datao=0, datal=4096, cccid=0 00:36:01.798 [2024-10-01 22:34:57.021463] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11b1480) on tqpair(0x1151760): expected_datao=0, 
payload_size=4096 00:36:01.798 [2024-10-01 22:34:57.021469] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:01.798 [2024-10-01 22:34:57.021502] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:01.798 [2024-10-01 22:34:57.021506] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:02.082 [2024-10-01 22:34:57.061773] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.082 [2024-10-01 22:34:57.061784] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.082 [2024-10-01 22:34:57.061788] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.082 [2024-10-01 22:34:57.061792] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11b1480) on tqpair=0x1151760 00:36:02.082 [2024-10-01 22:34:57.061799] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:36:02.082 [2024-10-01 22:34:57.061805] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:36:02.082 [2024-10-01 22:34:57.061810] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:36:02.082 [2024-10-01 22:34:57.061815] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:36:02.082 [2024-10-01 22:34:57.061820] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:36:02.082 [2024-10-01 22:34:57.061824] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:36:02.082 [2024-10-01 22:34:57.061833] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:36:02.082 [2024-10-01 22:34:57.061839] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.082 [2024-10-01 22:34:57.061844] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.082 [2024-10-01 22:34:57.061847] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1151760) 00:36:02.082 [2024-10-01 22:34:57.061855] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:02.082 [2024-10-01 22:34:57.061866] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11b1480, cid 0, qid 0 00:36:02.082 [2024-10-01 22:34:57.061997] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.082 [2024-10-01 22:34:57.062004] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.082 [2024-10-01 22:34:57.062007] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.082 [2024-10-01 22:34:57.062011] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11b1480) on tqpair=0x1151760 00:36:02.082 [2024-10-01 22:34:57.062019] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.082 [2024-10-01 22:34:57.062023] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.082 [2024-10-01 22:34:57.062026] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1151760) 00:36:02.082 [2024-10-01 22:34:57.062032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:02.082 [2024-10-01 22:34:57.062039] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.082 [2024-10-01 22:34:57.062042] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.082 [2024-10-01 22:34:57.062046] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1151760) 00:36:02.082 [2024-10-01 22:34:57.062052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:02.082 [2024-10-01 22:34:57.062059] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.082 [2024-10-01 22:34:57.062063] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.082 [2024-10-01 22:34:57.062066] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1151760) 00:36:02.082 [2024-10-01 22:34:57.062075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:02.082 [2024-10-01 22:34:57.062082] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.082 [2024-10-01 22:34:57.062085] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.082 [2024-10-01 22:34:57.062089] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1151760) 00:36:02.082 [2024-10-01 22:34:57.062095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:02.082 [2024-10-01 22:34:57.062100] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:36:02.082 [2024-10-01 22:34:57.062110] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:36:02.082 [2024-10-01 22:34:57.062116] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.082 [2024-10-01 22:34:57.062120] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1151760) 00:36:02.082 [2024-10-01 22:34:57.062127] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.082 [2024-10-01 22:34:57.062139] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11b1480, cid 0, qid 0 00:36:02.082 [2024-10-01 22:34:57.062145] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11b1600, cid 1, qid 0 00:36:02.082 [2024-10-01 22:34:57.062150] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11b1780, cid 2, qid 0 00:36:02.082 [2024-10-01 22:34:57.062155] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11b1900, cid 3, qid 0 00:36:02.082 [2024-10-01 22:34:57.062159] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11b1a80, cid 4, qid 0 00:36:02.082 [2024-10-01 22:34:57.062406] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.082 [2024-10-01 22:34:57.062412] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.082 [2024-10-01 22:34:57.062416] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.082 [2024-10-01 22:34:57.062420] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x11b1a80) on tqpair=0x1151760 00:36:02.083 [2024-10-01 22:34:57.062425] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:36:02.083 [2024-10-01 22:34:57.062430] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:36:02.083 [2024-10-01 22:34:57.062441] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.083 [2024-10-01 22:34:57.062444] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1151760) 00:36:02.083 [2024-10-01 22:34:57.062451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.083 [2024-10-01 22:34:57.062461] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11b1a80, cid 4, qid 0 00:36:02.083 [2024-10-01 22:34:57.062679] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:02.083 [2024-10-01 22:34:57.062689] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:02.083 [2024-10-01 22:34:57.062695] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:02.083 [2024-10-01 22:34:57.062702] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1151760): datao=0, datal=4096, cccid=4 00:36:02.083 [2024-10-01 22:34:57.062707] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11b1a80) on tqpair(0x1151760): expected_datao=0, payload_size=4096 00:36:02.083 [2024-10-01 22:34:57.062711] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.083 [2024-10-01 22:34:57.062718] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:02.083 [2024-10-01 22:34:57.062722] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:02.083 [2024-10-01 22:34:57.062884] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.083 [2024-10-01 22:34:57.062891] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.083 [2024-10-01 22:34:57.062894] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.083 [2024-10-01 22:34:57.062898] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11b1a80) on tqpair=0x1151760 00:36:02.083 [2024-10-01 22:34:57.062909] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:36:02.083 [2024-10-01 22:34:57.062933] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.083 [2024-10-01 22:34:57.062938] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1151760) 00:36:02.083 [2024-10-01 22:34:57.062945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.083 [2024-10-01 22:34:57.062951] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.083 [2024-10-01 22:34:57.062955] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.083 [2024-10-01 22:34:57.062959] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1151760) 00:36:02.083 [2024-10-01 22:34:57.062965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:36:02.083 [2024-10-01 
22:34:57.062977] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11b1a80, cid 4, qid 0 00:36:02.083 [2024-10-01 22:34:57.062982] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11b1c00, cid 5, qid 0 00:36:02.083 [2024-10-01 22:34:57.063178] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:02.083 [2024-10-01 22:34:57.063184] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:02.083 [2024-10-01 22:34:57.063188] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:02.083 [2024-10-01 22:34:57.063191] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1151760): datao=0, datal=1024, cccid=4 00:36:02.083 [2024-10-01 22:34:57.063196] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11b1a80) on tqpair(0x1151760): expected_datao=0, payload_size=1024 00:36:02.083 [2024-10-01 22:34:57.063200] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.083 [2024-10-01 22:34:57.063207] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:02.083 [2024-10-01 22:34:57.063210] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:02.083 [2024-10-01 22:34:57.063216] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.083 [2024-10-01 22:34:57.063222] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.083 [2024-10-01 22:34:57.063225] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.083 [2024-10-01 22:34:57.063229] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11b1c00) on tqpair=0x1151760 00:36:02.083 [2024-10-01 22:34:57.107632] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.083 [2024-10-01 22:34:57.107642] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.083 [2024-10-01 22:34:57.107646] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.083 [2024-10-01 22:34:57.107650] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11b1a80) on tqpair=0x1151760 00:36:02.083 [2024-10-01 22:34:57.107664] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.083 [2024-10-01 22:34:57.107668] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1151760) 00:36:02.083 [2024-10-01 22:34:57.107675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.083 [2024-10-01 22:34:57.107690] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11b1a80, cid 4, qid 0 00:36:02.083 [2024-10-01 22:34:57.107802] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:02.083 [2024-10-01 22:34:57.107808] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:02.083 [2024-10-01 22:34:57.107814] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:02.083 [2024-10-01 22:34:57.107818] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1151760): datao=0, datal=3072, cccid=4 00:36:02.083 [2024-10-01 22:34:57.107823] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11b1a80) on tqpair(0x1151760): expected_datao=0, payload_size=3072 00:36:02.083 [2024-10-01 22:34:57.107827] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.083 [2024-10-01 22:34:57.107834] 
nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:02.083 [2024-10-01 22:34:57.107837] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:02.083 [2024-10-01 22:34:57.107980] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.083 [2024-10-01 22:34:57.107986] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.083 [2024-10-01 22:34:57.107990] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.083 [2024-10-01 22:34:57.107994] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11b1a80) on tqpair=0x1151760 00:36:02.083 [2024-10-01 22:34:57.108002] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.083 [2024-10-01 22:34:57.108006] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1151760) 00:36:02.083 [2024-10-01 22:34:57.108013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.083 [2024-10-01 22:34:57.108027] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11b1a80, cid 4, qid 0 00:36:02.083 [2024-10-01 22:34:57.108294] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:02.083 [2024-10-01 22:34:57.108300] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:02.083 [2024-10-01 22:34:57.108304] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:02.083 [2024-10-01 22:34:57.108308] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1151760): datao=0, datal=8, cccid=4 00:36:02.083 [2024-10-01 22:34:57.108312] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11b1a80) on tqpair(0x1151760): expected_datao=0, payload_size=8 00:36:02.083 [2024-10-01 22:34:57.108317] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.083 [2024-10-01 22:34:57.108323] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:02.083 [2024-10-01 22:34:57.108327] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:02.083 [2024-10-01 22:34:57.148818] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.083 [2024-10-01 22:34:57.148828] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.083 [2024-10-01 22:34:57.148831] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.083 [2024-10-01 22:34:57.148835] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11b1a80) on tqpair=0x1151760
00:36:02.083 =====================================================
00:36:02.083 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:36:02.083 =====================================================
00:36:02.083 Controller Capabilities/Features
00:36:02.083 ================================
00:36:02.083 Vendor ID: 0000
00:36:02.083 Subsystem Vendor ID: 0000
00:36:02.083 Serial Number: ....................
00:36:02.083 Model Number: ........................................
00:36:02.083 Firmware Version: 25.01
00:36:02.083 Recommended Arb Burst: 0
00:36:02.083 IEEE OUI Identifier: 00 00 00
00:36:02.083 Multi-path I/O
00:36:02.083 May have multiple subsystem ports: No
00:36:02.083 May have multiple controllers: No
00:36:02.083 Associated with SR-IOV VF: No
00:36:02.083 Max Data Transfer Size: 131072
00:36:02.083 Max Number of Namespaces: 0
00:36:02.083 Max Number of I/O Queues: 1024
00:36:02.083 NVMe Specification Version (VS): 1.3
00:36:02.083 NVMe Specification Version (Identify): 1.3
00:36:02.083 Maximum Queue Entries: 128
00:36:02.083 Contiguous Queues Required: Yes
00:36:02.083 Arbitration Mechanisms Supported
00:36:02.083 Weighted Round Robin: Not Supported
00:36:02.083 Vendor Specific: Not Supported
00:36:02.083 Reset Timeout: 15000 ms
00:36:02.083 Doorbell Stride: 4 bytes
00:36:02.083 NVM Subsystem Reset: Not Supported
00:36:02.083 Command Sets Supported
00:36:02.083 NVM Command Set: Supported
00:36:02.083 Boot Partition: Not Supported
00:36:02.083 Memory Page Size Minimum: 4096 bytes
00:36:02.083 Memory Page Size Maximum: 4096 bytes
00:36:02.083 Persistent Memory Region: Not Supported
00:36:02.083 Optional Asynchronous Events Supported
00:36:02.083 Namespace Attribute Notices: Not Supported
00:36:02.083 Firmware Activation Notices: Not Supported
00:36:02.083 ANA Change Notices: Not Supported
00:36:02.083 PLE Aggregate Log Change Notices: Not Supported
00:36:02.083 LBA Status Info Alert Notices: Not Supported
00:36:02.083 EGE Aggregate Log Change Notices: Not Supported
00:36:02.083 Normal NVM Subsystem Shutdown event: Not Supported
00:36:02.083 Zone Descriptor Change Notices: Not Supported
00:36:02.083 Discovery Log Change Notices: Supported
00:36:02.083 Controller Attributes
00:36:02.083 128-bit Host Identifier: Not Supported
00:36:02.083 Non-Operational Permissive Mode: Not Supported
00:36:02.083 NVM Sets: Not Supported
00:36:02.084 Read Recovery Levels: Not Supported
00:36:02.084 Endurance Groups: Not Supported
00:36:02.084 Predictable Latency Mode: Not Supported
00:36:02.084 Traffic Based Keep Alive: Not Supported
00:36:02.084 Namespace Granularity: Not Supported
00:36:02.084 SQ Associations: Not Supported
00:36:02.084 UUID List: Not Supported
00:36:02.084 Multi-Domain Subsystem: Not Supported
00:36:02.084 Fixed Capacity Management: Not Supported
00:36:02.084 Variable Capacity Management: Not Supported
00:36:02.084 Delete Endurance Group: Not Supported
00:36:02.084 Delete NVM Set: Not Supported
00:36:02.084 Extended LBA Formats Supported: Not Supported
00:36:02.084 Flexible Data Placement Supported: Not Supported
00:36:02.084
00:36:02.084 Controller Memory Buffer Support
00:36:02.084 ================================
00:36:02.084 Supported: No
00:36:02.084
00:36:02.084 Persistent Memory Region Support
00:36:02.084 ================================
00:36:02.084 Supported: No
00:36:02.084
00:36:02.084 Admin Command Set Attributes
00:36:02.084 ============================
00:36:02.084 Security Send/Receive: Not Supported
00:36:02.084 Format NVM: Not Supported
00:36:02.084 Firmware Activate/Download: Not Supported
00:36:02.084 Namespace Management: Not Supported
00:36:02.084 Device Self-Test: Not Supported
00:36:02.084 Directives: Not Supported
00:36:02.084 NVMe-MI: Not Supported
00:36:02.084 Virtualization Management: Not Supported
00:36:02.084 Doorbell Buffer Config: Not Supported
00:36:02.084 Get LBA Status Capability: Not Supported
00:36:02.084 Command & Feature Lockdown Capability: Not Supported
00:36:02.084 Abort Command Limit: 1
00:36:02.084 Async Event Request Limit: 4
00:36:02.084 Number of Firmware Slots: N/A
00:36:02.084 Firmware Slot 1 Read-Only: N/A
00:36:02.084 Firmware Activation Without Reset: N/A
00:36:02.084 Multiple Update Detection Support: N/A
00:36:02.084 Firmware Update Granularity: No Information Provided
00:36:02.084 Per-Namespace SMART Log: No
00:36:02.084 Asymmetric Namespace Access Log Page: Not Supported
00:36:02.084 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:36:02.084 Command Effects Log Page: Not Supported
00:36:02.084 Get Log Page Extended Data: Supported
00:36:02.084 Telemetry Log Pages: Not Supported
00:36:02.084 Persistent Event Log Pages: Not Supported
00:36:02.084 Supported Log Pages Log Page: May Support
00:36:02.084 Commands Supported & Effects Log Page: Not Supported
00:36:02.084 Feature Identifiers & Effects Log Page: May Support
00:36:02.084 NVMe-MI Commands & Effects Log Page: May Support
00:36:02.084 Data Area 4 for Telemetry Log: Not Supported
00:36:02.084 Error Log Page Entries Supported: 128
00:36:02.084 Keep Alive: Not Supported
00:36:02.084
00:36:02.084 NVM Command Set Attributes
00:36:02.084 ==========================
00:36:02.084 Submission Queue Entry Size
00:36:02.084 Max: 1
00:36:02.084 Min: 1
00:36:02.084 Completion Queue Entry Size
00:36:02.084 Max: 1
00:36:02.084 Min: 1
00:36:02.084 Number of Namespaces: 0
00:36:02.084 Compare Command: Not Supported
00:36:02.084 Write Uncorrectable Command: Not Supported
00:36:02.084 Dataset Management Command: Not Supported
00:36:02.084 Write Zeroes Command: Not Supported
00:36:02.084 Set Features Save Field: Not Supported
00:36:02.084 Reservations: Not Supported
00:36:02.084 Timestamp: Not Supported
00:36:02.084 Copy: Not Supported
00:36:02.084 Volatile Write Cache: Not Present
00:36:02.084 Atomic Write Unit (Normal): 1
00:36:02.084 Atomic Write Unit (PFail): 1
00:36:02.084 Atomic Compare & Write Unit: 1
00:36:02.084 Fused Compare & Write: Supported
00:36:02.084 Scatter-Gather List
00:36:02.084 SGL Command Set: Supported
00:36:02.084 SGL Keyed: Supported
00:36:02.084 SGL Bit Bucket Descriptor: Not Supported
00:36:02.084 SGL Metadata Pointer: Not Supported
00:36:02.084 Oversized SGL: Not Supported
00:36:02.084 SGL Metadata Address: Not Supported
00:36:02.084 SGL Offset: Supported
00:36:02.084 Transport SGL Data Block: Not Supported
00:36:02.084 Replay Protected Memory Block: Not Supported
00:36:02.084
00:36:02.084 Firmware Slot Information
00:36:02.084 =========================
00:36:02.084 Active slot: 0
00:36:02.084
00:36:02.084
00:36:02.084 Error Log
00:36:02.084 =========
00:36:02.084
00:36:02.084 Active Namespaces
00:36:02.084 =================
00:36:02.084 Discovery Log Page
00:36:02.084 ==================
00:36:02.084 Generation Counter: 2
00:36:02.084 Number of Records: 2
00:36:02.084 Record Format: 0
00:36:02.084
00:36:02.084 Discovery Log Entry 0
00:36:02.084 ----------------------
00:36:02.084 Transport Type: 3 (TCP)
00:36:02.084 Address Family: 1 (IPv4)
00:36:02.084 Subsystem Type: 3 (Current Discovery Subsystem)
00:36:02.084 Entry Flags:
00:36:02.084 Duplicate Returned Information: 1
00:36:02.084 Explicit Persistent Connection Support for Discovery: 1
00:36:02.084 Transport Requirements:
00:36:02.084 Secure Channel: Not Required
00:36:02.084 Port ID: 0 (0x0000)
00:36:02.084 Controller ID: 65535 (0xffff)
00:36:02.084 Admin Max SQ Size: 128
00:36:02.084 Transport Service Identifier: 4420
00:36:02.084 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:36:02.084 Transport Address: 10.0.0.2
00:36:02.084 Discovery Log Entry 1
00:36:02.084 ----------------------
00:36:02.084 Transport Type: 3 (TCP)
00:36:02.084 Address Family: 1 (IPv4)
00:36:02.084 Subsystem Type: 2 (NVM Subsystem)
00:36:02.084 Entry Flags:
00:36:02.084 Duplicate Returned Information: 0
00:36:02.084 Explicit Persistent Connection Support for Discovery: 0
00:36:02.084 Transport Requirements:
00:36:02.084 Secure Channel: Not Required
00:36:02.084 Port ID: 0 (0x0000)
00:36:02.084 Controller ID: 65535 (0xffff)
00:36:02.084 Admin Max SQ Size: 128
00:36:02.084 Transport Service Identifier: 4420
00:36:02.084 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:36:02.084 Transport Address: 10.0.0.2
[2024-10-01 22:34:57.148918] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:36:02.084 [2024-10-01 22:34:57.148928] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11b1480) on tqpair=0x1151760 00:36:02.084 [2024-10-01 22:34:57.148934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:02.084 [2024-10-01 22:34:57.148940] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11b1600) on tqpair=0x1151760 00:36:02.084 [2024-10-01 22:34:57.148945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:02.084 [2024-10-01 22:34:57.148950] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11b1780) on tqpair=0x1151760 00:36:02.084 [2024-10-01 22:34:57.148955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:02.084 [2024-10-01 22:34:57.148960] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11b1900) on tqpair=0x1151760 00:36:02.084 [2024-10-01 22:34:57.148965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:02.084 [2024-10-01 22:34:57.148975] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.084 [2024-10-01 22:34:57.148979] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.084 [2024-10-01 22:34:57.148982] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1151760) 00:36:02.084 [2024-10-01 22:34:57.148990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.084 [2024-10-01 22:34:57.149003] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11b1900, cid 3, qid 0 00:36:02.084 [2024-10-01 22:34:57.149143] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.084 [2024-10-01 22:34:57.149150] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.084 [2024-10-01 22:34:57.149153] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.084 [2024-10-01 22:34:57.149157] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11b1900) on tqpair=0x1151760 00:36:02.084 [2024-10-01 22:34:57.149164] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.084 [2024-10-01 22:34:57.149168] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.084 [2024-10-01 22:34:57.149172] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1151760) 00:36:02.084 [2024-10-01
22:34:57.149179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.084 [2024-10-01 22:34:57.149192] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11b1900, cid 3, qid 0 00:36:02.084 [2024-10-01 22:34:57.149412] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.084 [2024-10-01 22:34:57.149419] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.084 [2024-10-01 22:34:57.149422] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.084 [2024-10-01 22:34:57.149426] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11b1900) on tqpair=0x1151760 00:36:02.084 [2024-10-01 22:34:57.149431] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:36:02.084 [2024-10-01 22:34:57.149438] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:36:02.084 [2024-10-01 22:34:57.149447] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.084 [2024-10-01 22:34:57.149451] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.084 [2024-10-01 22:34:57.149455] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1151760) 00:36:02.085 [2024-10-01 22:34:57.149461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.085 [2024-10-01 22:34:57.149471] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11b1900, cid 3, qid 0 00:36:02.085 [2024-10-01 22:34:57.149646] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.085 [2024-10-01 22:34:57.149653] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.085 [2024-10-01 22:34:57.149657] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.085 [2024-10-01 22:34:57.149660] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11b1900) on tqpair=0x1151760 00:36:02.085 [2024-10-01 22:34:57.149670] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.085 [2024-10-01 22:34:57.149675] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.085 [2024-10-01 22:34:57.149678] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1151760) 00:36:02.085 [2024-10-01 22:34:57.149685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.085 [2024-10-01 22:34:57.149696] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11b1900, cid 3, qid 0 00:36:02.085 [2024-10-01 22:34:57.149787] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.085 [2024-10-01 22:34:57.149793] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.085 [2024-10-01 22:34:57.149799] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.085 [2024-10-01 22:34:57.149803] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11b1900) on tqpair=0x1151760 00:36:02.085 [2024-10-01 22:34:57.149813] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.085 [2024-10-01 22:34:57.149817] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.085 [2024-10-01 22:34:57.149821] 
nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1151760) 00:36:02.085 [2024-10-01 22:34:57.149827] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.085 [2024-10-01 22:34:57.149837] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11b1900, cid 3, qid 0 00:36:02.085 [2024-10-01 22:34:57.150046] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.085 [2024-10-01 22:34:57.150053] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.085 [2024-10-01 22:34:57.150056] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.085 [2024-10-01 22:34:57.150060] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11b1900) on tqpair=0x1151760 00:36:02.085 [2024-10-01 22:34:57.150070] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.085 [2024-10-01 22:34:57.150073] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.085 [2024-10-01 22:34:57.150077] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1151760) 00:36:02.085 [2024-10-01 22:34:57.150084] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.085 [2024-10-01 22:34:57.150094] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11b1900, cid 3, qid 0 00:36:02.085 [2024-10-01 22:34:57.150275] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.085 [2024-10-01 22:34:57.150281] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.085 [2024-10-01 22:34:57.150285] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.085 [2024-10-01 22:34:57.150288] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11b1900) on tqpair=0x1151760 00:36:02.085 [2024-10-01 22:34:57.150298] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.085 [2024-10-01 22:34:57.150302] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.085 [2024-10-01 22:34:57.150305] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1151760) 00:36:02.085 [2024-10-01 22:34:57.150312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.085 [2024-10-01 22:34:57.150322] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11b1900, cid 3, qid 0 00:36:02.085 [2024-10-01 22:34:57.150501] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.085 [2024-10-01 22:34:57.150507] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.085 [2024-10-01 22:34:57.150511] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.085 [2024-10-01 22:34:57.150514] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11b1900) on tqpair=0x1151760 00:36:02.085 [2024-10-01 22:34:57.150524] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.085 [2024-10-01 22:34:57.150528] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.085 [2024-10-01 22:34:57.150532] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1151760) 00:36:02.085 [2024-10-01 22:34:57.150539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.085 [2024-10-01 22:34:57.150549] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11b1900, cid 3, qid 0 00:36:02.085 [2024-10-01 22:34:57.150768] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.085 [2024-10-01 22:34:57.150776] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.085 [2024-10-01 22:34:57.150779] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.085 [2024-10-01 22:34:57.150785] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11b1900) on tqpair=0x1151760 00:36:02.085 [2024-10-01 22:34:57.150795] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.085 [2024-10-01 22:34:57.150799] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.085 [2024-10-01 22:34:57.150803] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1151760) 00:36:02.085 [2024-10-01 22:34:57.150809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.085 [2024-10-01 22:34:57.150820] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11b1900, cid 3, qid 0 00:36:02.085 [2024-10-01 22:34:57.150989] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.085 [2024-10-01 22:34:57.150996] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.085 [2024-10-01 22:34:57.150999] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.085 [2024-10-01 22:34:57.151003] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11b1900) on tqpair=0x1151760 00:36:02.085 [2024-10-01 22:34:57.151013] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.085 [2024-10-01 22:34:57.151017] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.085 [2024-10-01 22:34:57.151020] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1151760) 00:36:02.085 [2024-10-01 22:34:57.151027] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.085 [2024-10-01 22:34:57.151037] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11b1900, cid 3, qid 0 00:36:02.085 [2024-10-01 22:34:57.151221] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.085 [2024-10-01 22:34:57.151228] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.085 [2024-10-01 22:34:57.151231] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.085 [2024-10-01 22:34:57.151235] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11b1900) on tqpair=0x1151760 00:36:02.085 [2024-10-01 22:34:57.151245] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.085 [2024-10-01 22:34:57.151249] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.085 [2024-10-01 22:34:57.151252] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1151760) 00:36:02.085 [2024-10-01 22:34:57.151259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.085 [2024-10-01 22:34:57.151270] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11b1900, cid 3, qid 0 00:36:02.085 
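The FABRIC PROPERTY GET capsules repeating above are the fabrics analogue of MMIO register reads: after "Prepare to destruct SSD" the driver sets CC.SHN and then polls CSTS.SHST until the controller reports shutdown processing complete (logged just below as "shutdown complete in 6 milliseconds"). A minimal, self-contained sketch of that poll loop; prop_get()/prop_set() and the toy in-memory register file that completes shutdown instantly are illustrative stand-ins, not SPDK internals:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* NVMe register offsets double as fabrics property offsets (NVMe spec, sec. 3.1). */
#define NVME_REG_CC   0x14
#define NVME_REG_CSTS 0x1c

/* Toy property backend standing in for Fabrics Property Get/Set capsules;
 * a real host sends these over the admin queue, as in the records above. */
static uint32_t g_regs[0x40 / 4];

static uint32_t prop_get(uint32_t ofst) { return g_regs[ofst / 4]; }

static void prop_set(uint32_t ofst, uint32_t val)
{
    g_regs[ofst / 4] = val;
    if (ofst == NVME_REG_CC && ((val >> 14) & 3u) == 1u) {
        /* Toy controller: a normal shutdown request completes immediately
         * (CSTS.SHST = 10b, "shutdown processing complete"). */
        g_regs[NVME_REG_CSTS / 4] |= 2u << 2;
    }
}

/* The sequence the nvme_ctrlr_shutdown_* records trace: write CC.SHN = 01b,
 * then poll CSTS.SHST until complete or the 10000 ms shutdown timeout expires. */
static bool shutdown_ctrlr(uint32_t timeout_ms)
{
    uint32_t cc = prop_get(NVME_REG_CC);

    prop_set(NVME_REG_CC, (cc & ~(3u << 14)) | (1u << 14)); /* CC.SHN = normal */

    for (uint32_t ms = 0; ms < timeout_ms; ms++) {
        if (((prop_get(NVME_REG_CSTS) >> 2) & 3u) == 2u) {  /* CSTS.SHST done */
            printf("shutdown complete in %u milliseconds\n", (unsigned)ms);
            return true;
        }
        /* real code sleeps ~1 ms here between property-get capsules */
    }
    return false;
}

int main(void) { return shutdown_ctrlr(10000) ? 0 : 1; }
```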
[2024-10-01 22:34:57.151447] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.085 [2024-10-01 22:34:57.151453] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.085 [2024-10-01 22:34:57.151457] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.085 [2024-10-01 22:34:57.151461] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11b1900) on tqpair=0x1151760 00:36:02.085 [2024-10-01 22:34:57.151471] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.085 [2024-10-01 22:34:57.151475] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.085 [2024-10-01 22:34:57.151478] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1151760) 00:36:02.085 [2024-10-01 22:34:57.151485] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.085 [2024-10-01 22:34:57.151495] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11b1900, cid 3, qid 0 00:36:02.085 [2024-10-01 22:34:57.155632] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.085 [2024-10-01 22:34:57.155641] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.085 [2024-10-01 22:34:57.155644] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.085 [2024-10-01 22:34:57.155648] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11b1900) on tqpair=0x1151760 00:36:02.085 [2024-10-01 22:34:57.155663] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.085 [2024-10-01 22:34:57.155667] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.085 [2024-10-01 22:34:57.155671] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1151760) 00:36:02.085 [2024-10-01 22:34:57.155678] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.085 [2024-10-01 22:34:57.155689] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11b1900, cid 3, qid 0 00:36:02.085 [2024-10-01 22:34:57.155863] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.086 [2024-10-01 22:34:57.155870] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.086 [2024-10-01 22:34:57.155873] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.086 [2024-10-01 22:34:57.155877] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11b1900) on tqpair=0x1151760 00:36:02.086 [2024-10-01 22:34:57.155885] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:36:02.086 00:36:02.086 22:34:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:36:02.086 [2024-10-01 22:34:57.199654] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
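The -r argument above is a transport ID string in the same key:value form that SPDK's public API parses. A minimal sketch of the setup spdk_nvme_identify performs before the records that follow, using only public calls (spdk_nvme_transport_id_parse(), spdk_nvme_connect(), spdk_nvme_detach()); the program name and the trimmed error handling are assumptions of the sketch:

```c
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_transport_id trid = {};
    struct spdk_nvme_ctrlr *ctrlr;

    /* Initialize the SPDK environment (produces the DPDK EAL parameter
     * record logged just after this). */
    spdk_env_opts_init(&env_opts);
    env_opts.name = "identify_sketch";
    if (spdk_env_init(&env_opts) < 0) {
        return 1;
    }

    /* Same transport ID string as passed via -r above. */
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        return 1;
    }

    /* Drives the whole state machine traced below: connect adminq,
     * read VS/CAP, enable the controller, IDENTIFY, configure AER,
     * set the keep alive timeout, and so on. */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        fprintf(stderr, "spdk_nvme_connect() failed\n");
        return 1;
    }

    spdk_nvme_detach(ctrlr);
    return 0;
}
```

spdk_nvme_connect() returns only once that initialization sequence reaches the "setting state to ready" record, which is why every step below appears before the tool prints anything.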
00:36:02.086 [2024-10-01 22:34:57.199692] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid302820 ] 00:36:02.086 [2024-10-01 22:34:57.231188] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:36:02.086 [2024-10-01 22:34:57.231230] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:36:02.086 [2024-10-01 22:34:57.231235] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:36:02.086 [2024-10-01 22:34:57.231247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:36:02.086 [2024-10-01 22:34:57.231255] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:36:02.086 [2024-10-01 22:34:57.234836] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:36:02.086 [2024-10-01 22:34:57.234864] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x22aa760 0 00:36:02.086 [2024-10-01 22:34:57.242638] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:36:02.086 [2024-10-01 22:34:57.242650] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:36:02.086 [2024-10-01 22:34:57.242654] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:36:02.086 [2024-10-01 22:34:57.242658] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:36:02.086 [2024-10-01 22:34:57.242680] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.086 [2024-10-01 22:34:57.242686] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.086 [2024-10-01 22:34:57.242690] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22aa760) 00:36:02.086 [2024-10-01 22:34:57.242702] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:36:02.086 [2024-10-01 22:34:57.242719] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230a480, cid 0, qid 0 00:36:02.086 [2024-10-01 22:34:57.249634] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.086 [2024-10-01 22:34:57.249648] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.086 [2024-10-01 22:34:57.249652] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.086 [2024-10-01 22:34:57.249656] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230a480) on tqpair=0x22aa760 00:36:02.086 [2024-10-01 22:34:57.249666] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:36:02.086 [2024-10-01 22:34:57.249672] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:36:02.086 [2024-10-01 22:34:57.249678] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:36:02.086 [2024-10-01 22:34:57.249689] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.086 [2024-10-01 22:34:57.249694] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.086 [2024-10-01 22:34:57.249697] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22aa760) 00:36:02.086 [2024-10-01 22:34:57.249705] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.086 [2024-10-01 22:34:57.249719] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230a480, cid 0, qid 0 00:36:02.086 [2024-10-01 22:34:57.249872] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.086 [2024-10-01 22:34:57.249878] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.086 [2024-10-01 22:34:57.249882] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.086 [2024-10-01 22:34:57.249886] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230a480) on tqpair=0x22aa760 00:36:02.086 [2024-10-01 22:34:57.249891] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:36:02.086 [2024-10-01 22:34:57.249898] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:36:02.086 [2024-10-01 22:34:57.249905] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.086 [2024-10-01 22:34:57.249909] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.086 [2024-10-01 22:34:57.249912] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22aa760) 00:36:02.086 [2024-10-01 22:34:57.249919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.086 [2024-10-01 22:34:57.249930] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230a480, cid 0, qid 0 00:36:02.086 [2024-10-01 22:34:57.250087] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.086 [2024-10-01 22:34:57.250094] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.086 [2024-10-01 22:34:57.250097] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.086 [2024-10-01 22:34:57.250101] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230a480) on tqpair=0x22aa760 00:36:02.086 [2024-10-01 22:34:57.250106] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:36:02.086 [2024-10-01 22:34:57.250114] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:36:02.086 [2024-10-01 22:34:57.250121] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.086 [2024-10-01 22:34:57.250125] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.086 [2024-10-01 22:34:57.250128] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22aa760) 00:36:02.086 [2024-10-01 22:34:57.250135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.086 [2024-10-01 22:34:57.250145] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230a480, cid 0, qid 0 00:36:02.086 [2024-10-01 22:34:57.250306] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.086 [2024-10-01 22:34:57.250312] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.086 [2024-10-01 22:34:57.250318] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.086 [2024-10-01 22:34:57.250322] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230a480) on tqpair=0x22aa760 00:36:02.086 [2024-10-01 22:34:57.250327] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:36:02.086 [2024-10-01 22:34:57.250337] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.086 [2024-10-01 22:34:57.250342] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.086 [2024-10-01 22:34:57.250346] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22aa760) 00:36:02.086 [2024-10-01 22:34:57.250353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.086 [2024-10-01 22:34:57.250363] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230a480, cid 0, qid 0 00:36:02.086 [2024-10-01 22:34:57.250586] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.086 [2024-10-01 22:34:57.250593] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.086 [2024-10-01 22:34:57.250596] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.086 [2024-10-01 22:34:57.250600] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230a480) on tqpair=0x22aa760 00:36:02.086 [2024-10-01 22:34:57.250604] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:36:02.086 [2024-10-01 22:34:57.250609] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:36:02.086 [2024-10-01 22:34:57.250616] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:36:02.086 [2024-10-01 22:34:57.250722] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:36:02.086 [2024-10-01 22:34:57.250726] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:36:02.086 [2024-10-01 22:34:57.250733] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.086 [2024-10-01 22:34:57.250737] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.086 [2024-10-01 22:34:57.250740] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22aa760) 00:36:02.086 [2024-10-01 22:34:57.250747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.086 [2024-10-01 22:34:57.250758] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230a480, cid 0, qid 0 00:36:02.086 [2024-10-01 22:34:57.250919] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.086 [2024-10-01 22:34:57.250925] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.086 [2024-10-01 22:34:57.250929] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.086 [2024-10-01 22:34:57.250932] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230a480) on tqpair=0x22aa760 00:36:02.086 [2024-10-01 22:34:57.250937] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:36:02.086 [2024-10-01 22:34:57.250946] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.086 [2024-10-01 22:34:57.250950] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.086 [2024-10-01 22:34:57.250954] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22aa760) 00:36:02.086 [2024-10-01 22:34:57.250960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.086 [2024-10-01 22:34:57.250970] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230a480, cid 0, qid 0 00:36:02.086 [2024-10-01 22:34:57.251169] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.086 [2024-10-01 22:34:57.251175] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.086 [2024-10-01 22:34:57.251181] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.086 [2024-10-01 22:34:57.251185] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230a480) on tqpair=0x22aa760 00:36:02.086 [2024-10-01 22:34:57.251190] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:36:02.087 [2024-10-01 22:34:57.251194] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:36:02.087 [2024-10-01 22:34:57.251202] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:36:02.087 [2024-10-01 22:34:57.251209] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:36:02.087 [2024-10-01 22:34:57.251217] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.087 [2024-10-01 22:34:57.251221] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22aa760) 00:36:02.087 [2024-10-01 22:34:57.251228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.087 [2024-10-01 22:34:57.251238] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230a480, cid 0, qid 0 00:36:02.087 [2024-10-01 22:34:57.251466] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:02.087 [2024-10-01 22:34:57.251472] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:02.087 [2024-10-01 22:34:57.251476] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:02.087 [2024-10-01 22:34:57.251480] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22aa760): datao=0, datal=4096, cccid=0 00:36:02.087 [2024-10-01 22:34:57.251485] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x230a480) on tqpair(0x22aa760): expected_datao=0, payload_size=4096 00:36:02.087 [2024-10-01 22:34:57.251489] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.087 [2024-10-01 22:34:57.251496] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:02.087 [2024-10-01 22:34:57.251500] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:02.087 [2024-10-01 
22:34:57.251643] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.087 [2024-10-01 22:34:57.251650] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.087 [2024-10-01 22:34:57.251653] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.087 [2024-10-01 22:34:57.251657] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230a480) on tqpair=0x22aa760 00:36:02.087 [2024-10-01 22:34:57.251664] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:36:02.087 [2024-10-01 22:34:57.251669] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:36:02.087 [2024-10-01 22:34:57.251673] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:36:02.087 [2024-10-01 22:34:57.251678] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:36:02.087 [2024-10-01 22:34:57.251682] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:36:02.087 [2024-10-01 22:34:57.251687] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:36:02.087 [2024-10-01 22:34:57.251695] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:36:02.087 [2024-10-01 22:34:57.251702] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.087 [2024-10-01 22:34:57.251706] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.087 [2024-10-01 22:34:57.251709] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22aa760) 00:36:02.087 [2024-10-01 22:34:57.251718] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:02.087 [2024-10-01 22:34:57.251730] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230a480, cid 0, qid 0 00:36:02.087 [2024-10-01 22:34:57.251918] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.087 [2024-10-01 22:34:57.251925] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.087 [2024-10-01 22:34:57.251928] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.087 [2024-10-01 22:34:57.251932] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230a480) on tqpair=0x22aa760 00:36:02.087 [2024-10-01 22:34:57.251939] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.087 [2024-10-01 22:34:57.251943] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.087 [2024-10-01 22:34:57.251946] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22aa760) 00:36:02.087 [2024-10-01 22:34:57.251952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:02.087 [2024-10-01 22:34:57.251959] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.087 [2024-10-01 22:34:57.251962] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.087 [2024-10-01 22:34:57.251966] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x22aa760) 00:36:02.087 
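The nvme_ctrlr_identify_done records above are where the driver caches the identify-controller data it just transferred: an MDTS-limited max transfer size of 131072 bytes, CNTLID 0x0001, and fused compare-and-write support. After connect, an application can read the same fields through spdk_nvme_ctrlr_get_data(); a short sketch, assuming MPSMIN = 0 (4096-byte pages, as the identify output above reports) and a ctrlr obtained from a successful spdk_nvme_connect():

```c
#include <stdint.h>
#include <stdio.h>
#include "spdk/nvme.h"

/* Print a few of the identify-controller fields the debug records above
 * mention (MDTS, CNTLID, fused compare & write). */
static void print_ctrlr_summary(struct spdk_nvme_ctrlr *ctrlr)
{
    const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

    /* MDTS is a power-of-two multiple of the minimum page size:
     * 4096 << 5 == 131072, matching "MDTS max_xfer_size 131072" above.
     * MDTS == 0 means no transfer-size limit from the controller. */
    uint32_t mdts_bytes = cdata->mdts ? (4096u << cdata->mdts) : UINT32_MAX;

    printf("CNTLID:            0x%04x\n", (unsigned)cdata->cntlid);
    printf("MDTS:              %u (%u bytes)\n",
           (unsigned)cdata->mdts, (unsigned)mdts_bytes);
    printf("Fused cmp & write: %s\n",
           cdata->fuses.compare_and_write ? "Supported" : "Not Supported");
}
```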
[2024-10-01 22:34:57.251972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:02.087 [2024-10-01 22:34:57.251978] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.087 [2024-10-01 22:34:57.251981] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.087 [2024-10-01 22:34:57.251985] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x22aa760) 00:36:02.087 [2024-10-01 22:34:57.251991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:02.087 [2024-10-01 22:34:57.251997] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.087 [2024-10-01 22:34:57.252000] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.087 [2024-10-01 22:34:57.252004] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22aa760) 00:36:02.087 [2024-10-01 22:34:57.252010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:02.087 [2024-10-01 22:34:57.252015] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:36:02.087 [2024-10-01 22:34:57.252025] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:36:02.087 [2024-10-01 22:34:57.252031] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.087 [2024-10-01 22:34:57.252035] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22aa760) 00:36:02.087 [2024-10-01 22:34:57.252042] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.087 [2024-10-01 22:34:57.252053] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230a480, cid 0, qid 0 00:36:02.087 [2024-10-01 22:34:57.252059] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230a600, cid 1, qid 0 00:36:02.087 [2024-10-01 22:34:57.252063] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230a780, cid 2, qid 0 00:36:02.087 [2024-10-01 22:34:57.252068] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230a900, cid 3, qid 0 00:36:02.087 [2024-10-01 22:34:57.252073] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230aa80, cid 4, qid 0 00:36:02.087 [2024-10-01 22:34:57.252272] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.087 [2024-10-01 22:34:57.252279] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.087 [2024-10-01 22:34:57.252284] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.087 [2024-10-01 22:34:57.252288] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230aa80) on tqpair=0x22aa760 00:36:02.087 [2024-10-01 22:34:57.252293] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:36:02.087 [2024-10-01 22:34:57.252298] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:36:02.087 [2024-10-01 22:34:57.252306] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:36:02.087 [2024-10-01 22:34:57.252314] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:36:02.087 [2024-10-01 22:34:57.252320] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.087 [2024-10-01 22:34:57.252324] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.087 [2024-10-01 22:34:57.252328] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22aa760) 00:36:02.087 [2024-10-01 22:34:57.252334] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:02.087 [2024-10-01 22:34:57.252344] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230aa80, cid 4, qid 0 00:36:02.087 [2024-10-01 22:34:57.252527] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.087 [2024-10-01 22:34:57.252533] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.087 [2024-10-01 22:34:57.252537] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.087 [2024-10-01 22:34:57.252541] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230aa80) on tqpair=0x22aa760 00:36:02.087 [2024-10-01 22:34:57.252604] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:36:02.087 [2024-10-01 22:34:57.252614] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:36:02.087 [2024-10-01 22:34:57.252621] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.087 [2024-10-01 22:34:57.252629] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22aa760) 00:36:02.088 [2024-10-01 22:34:57.252635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.088 [2024-10-01 22:34:57.252647] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230aa80, cid 4, qid 0 00:36:02.088 [2024-10-01 22:34:57.252824] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:02.088 [2024-10-01 22:34:57.252830] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:02.088 [2024-10-01 22:34:57.252834] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:02.088 [2024-10-01 22:34:57.252837] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22aa760): datao=0, datal=4096, cccid=4 00:36:02.088 [2024-10-01 22:34:57.252842] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x230aa80) on tqpair(0x22aa760): expected_datao=0, payload_size=4096 00:36:02.088 [2024-10-01 22:34:57.252846] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.088 [2024-10-01 22:34:57.252853] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:02.088 [2024-10-01 22:34:57.252857] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:02.088 [2024-10-01 22:34:57.253023] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.088 [2024-10-01 22:34:57.253029] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:36:02.088 [2024-10-01 22:34:57.253032] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.088 [2024-10-01 22:34:57.253036] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230aa80) on tqpair=0x22aa760 00:36:02.088 [2024-10-01 22:34:57.253045] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:36:02.088 [2024-10-01 22:34:57.253061] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:36:02.088 [2024-10-01 22:34:57.253070] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:36:02.088 [2024-10-01 22:34:57.253077] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.088 [2024-10-01 22:34:57.253081] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22aa760) 00:36:02.088 [2024-10-01 22:34:57.253087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.088 [2024-10-01 22:34:57.253098] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230aa80, cid 4, qid 0 00:36:02.088 [2024-10-01 22:34:57.253313] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:02.088 [2024-10-01 22:34:57.253319] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:02.088 [2024-10-01 22:34:57.253323] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:02.088 [2024-10-01 22:34:57.253326] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22aa760): datao=0, datal=4096, cccid=4 00:36:02.088 [2024-10-01 22:34:57.253331] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x230aa80) on tqpair(0x22aa760): expected_datao=0, payload_size=4096 00:36:02.088 [2024-10-01 22:34:57.253335] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.088 [2024-10-01 22:34:57.253350] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:02.088 [2024-10-01 22:34:57.253354] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:02.088 [2024-10-01 22:34:57.253501] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.088 [2024-10-01 22:34:57.253507] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.088 [2024-10-01 22:34:57.253511] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.088 [2024-10-01 22:34:57.253515] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230aa80) on tqpair=0x22aa760 00:36:02.088 [2024-10-01 22:34:57.253526] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:36:02.088 [2024-10-01 22:34:57.253534] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:36:02.088 [2024-10-01 22:34:57.253542] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.088 [2024-10-01 22:34:57.253545] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22aa760) 00:36:02.088 [2024-10-01 22:34:57.253552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 
cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.088 [2024-10-01 22:34:57.253562] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230aa80, cid 4, qid 0 00:36:02.088 [2024-10-01 22:34:57.257633] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:02.088 [2024-10-01 22:34:57.257642] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:02.088 [2024-10-01 22:34:57.257645] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:02.088 [2024-10-01 22:34:57.257649] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22aa760): datao=0, datal=4096, cccid=4 00:36:02.088 [2024-10-01 22:34:57.257653] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x230aa80) on tqpair(0x22aa760): expected_datao=0, payload_size=4096 00:36:02.088 [2024-10-01 22:34:57.257657] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.088 [2024-10-01 22:34:57.257664] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:02.088 [2024-10-01 22:34:57.257668] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:02.088 [2024-10-01 22:34:57.257673] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.088 [2024-10-01 22:34:57.257682] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.088 [2024-10-01 22:34:57.257685] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.088 [2024-10-01 22:34:57.257689] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230aa80) on tqpair=0x22aa760 00:36:02.088 [2024-10-01 22:34:57.257697] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:36:02.088 [2024-10-01 22:34:57.257705] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:36:02.088 [2024-10-01 22:34:57.257713] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:36:02.088 [2024-10-01 22:34:57.257719] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:36:02.088 [2024-10-01 22:34:57.257724] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:36:02.088 [2024-10-01 22:34:57.257729] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:36:02.088 [2024-10-01 22:34:57.257734] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:36:02.088 [2024-10-01 22:34:57.257739] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:36:02.088 [2024-10-01 22:34:57.257744] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:36:02.088 [2024-10-01 22:34:57.257757] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.088 [2024-10-01 22:34:57.257761] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22aa760) 00:36:02.088 [2024-10-01 22:34:57.257768] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.088 [2024-10-01 22:34:57.257774] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.088 [2024-10-01 22:34:57.257778] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.088 [2024-10-01 22:34:57.257782] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22aa760) 00:36:02.088 [2024-10-01 22:34:57.257788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:36:02.088 [2024-10-01 22:34:57.257800] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230aa80, cid 4, qid 0 00:36:02.088 [2024-10-01 22:34:57.257806] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230ac00, cid 5, qid 0 00:36:02.088 [2024-10-01 22:34:57.257968] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.088 [2024-10-01 22:34:57.257975] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.088 [2024-10-01 22:34:57.257978] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.088 [2024-10-01 22:34:57.257982] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230aa80) on tqpair=0x22aa760 00:36:02.088 [2024-10-01 22:34:57.257989] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.088 [2024-10-01 22:34:57.257995] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.088 [2024-10-01 22:34:57.257998] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.088 [2024-10-01 22:34:57.258002] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230ac00) on tqpair=0x22aa760 00:36:02.088 [2024-10-01 22:34:57.258011] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.088 [2024-10-01 22:34:57.258015] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22aa760) 00:36:02.088 [2024-10-01 22:34:57.258021] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.088 [2024-10-01 22:34:57.258034] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230ac00, cid 5, qid 0 00:36:02.088 [2024-10-01 22:34:57.258208] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.088 [2024-10-01 22:34:57.258215] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.088 [2024-10-01 22:34:57.258218] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.088 [2024-10-01 22:34:57.258222] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230ac00) on tqpair=0x22aa760 00:36:02.088 [2024-10-01 22:34:57.258232] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.088 [2024-10-01 22:34:57.258235] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22aa760) 00:36:02.088 [2024-10-01 22:34:57.258242] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.088 [2024-10-01 22:34:57.258251] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230ac00, cid 5, qid 0 00:36:02.088 [2024-10-01 22:34:57.258462] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.088 
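The handler names in this trace map the NVMe/TCP PDU type codes directly: type 5 is routed to nvme_tcp_capsule_resp_hdr_handle (CapsuleResp) and type 7 to nvme_tcp_c2h_data_hdr_handle (C2HData), with the command capsules flowing the other way through nvme_tcp_qpair_capsule_cmd_send. A minimal sketch for summarizing that flow from a saved copy of this log; "build.log" is an assumed file name, and the type-code names come from the NVMe/TCP specification rather than from this output:

    # Count each PDU type seen on the wire (5=CapsuleResp, 7=C2HData per spec).
    grep -oE 'pdu type ?= ?[0-9]+' build.log | sort | uniq -c
    # Pair commands with completions by cid to follow one request end to end.
    grep -E 'capsule_cmd cid=|complete tcp_req' build.log | tail -n 20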
[2024-10-01 22:34:57.258469] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.088 [2024-10-01 22:34:57.258472] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.088 [2024-10-01 22:34:57.258476] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230ac00) on tqpair=0x22aa760 00:36:02.088 [2024-10-01 22:34:57.258486] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.088 [2024-10-01 22:34:57.258489] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22aa760) 00:36:02.088 [2024-10-01 22:34:57.258496] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.088 [2024-10-01 22:34:57.258505] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230ac00, cid 5, qid 0 00:36:02.088 [2024-10-01 22:34:57.258708] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.088 [2024-10-01 22:34:57.258715] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.088 [2024-10-01 22:34:57.258719] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.088 [2024-10-01 22:34:57.258722] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230ac00) on tqpair=0x22aa760 00:36:02.088 [2024-10-01 22:34:57.258737] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.089 [2024-10-01 22:34:57.258741] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22aa760) 00:36:02.089 [2024-10-01 22:34:57.258747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.089 [2024-10-01 22:34:57.258754] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.089 [2024-10-01 22:34:57.258758] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22aa760) 00:36:02.089 [2024-10-01 22:34:57.258764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.089 [2024-10-01 22:34:57.258772] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.089 [2024-10-01 22:34:57.258775] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x22aa760) 00:36:02.089 [2024-10-01 22:34:57.258781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.089 [2024-10-01 22:34:57.258791] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.089 [2024-10-01 22:34:57.258794] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x22aa760) 00:36:02.089 [2024-10-01 22:34:57.258801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.089 [2024-10-01 22:34:57.258812] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230ac00, cid 5, qid 0 00:36:02.089 [2024-10-01 22:34:57.258820] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230aa80, cid 4, qid 0 00:36:02.089 [2024-10-01 22:34:57.258825] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230ad80, cid 6, qid 0 00:36:02.089 [2024-10-01 22:34:57.258830] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230af00, cid 7, qid 0 00:36:02.089 [2024-10-01 22:34:57.259044] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:02.089 [2024-10-01 22:34:57.259050] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:02.089 [2024-10-01 22:34:57.259054] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:02.089 [2024-10-01 22:34:57.259057] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22aa760): datao=0, datal=8192, cccid=5 00:36:02.089 [2024-10-01 22:34:57.259062] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x230ac00) on tqpair(0x22aa760): expected_datao=0, payload_size=8192 00:36:02.089 [2024-10-01 22:34:57.259066] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.089 [2024-10-01 22:34:57.259164] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:02.089 [2024-10-01 22:34:57.259168] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:02.089 [2024-10-01 22:34:57.259174] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:02.089 [2024-10-01 22:34:57.259180] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:02.089 [2024-10-01 22:34:57.259183] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:02.089 [2024-10-01 22:34:57.259187] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22aa760): datao=0, datal=512, cccid=4 00:36:02.089 [2024-10-01 22:34:57.259192] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x230aa80) on tqpair(0x22aa760): expected_datao=0, payload_size=512 00:36:02.089 [2024-10-01 22:34:57.259196] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.089 [2024-10-01 22:34:57.259202] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:02.089 [2024-10-01 22:34:57.259206] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:02.089 [2024-10-01 22:34:57.259212] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:02.089 [2024-10-01 22:34:57.259217] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:02.089 [2024-10-01 22:34:57.259221] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:02.089 [2024-10-01 22:34:57.259224] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22aa760): datao=0, datal=512, cccid=6 00:36:02.089 [2024-10-01 22:34:57.259229] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x230ad80) on tqpair(0x22aa760): expected_datao=0, payload_size=512 00:36:02.089 [2024-10-01 22:34:57.259233] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.089 [2024-10-01 22:34:57.259239] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:02.089 [2024-10-01 22:34:57.259243] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:02.089 [2024-10-01 22:34:57.259249] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:02.089 [2024-10-01 22:34:57.259254] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:02.089 [2024-10-01 22:34:57.259258] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:02.089 [2024-10-01 22:34:57.259261] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x22aa760): datao=0, datal=4096, cccid=7 00:36:02.089 [2024-10-01 22:34:57.259266] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x230af00) on tqpair(0x22aa760): expected_datao=0, payload_size=4096 00:36:02.089 [2024-10-01 22:34:57.259270] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.089 [2024-10-01 22:34:57.259281] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:02.089 [2024-10-01 22:34:57.259285] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:02.089 [2024-10-01 22:34:57.259294] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.089 [2024-10-01 22:34:57.259300] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.089 [2024-10-01 22:34:57.259306] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.089 [2024-10-01 22:34:57.259310] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230ac00) on tqpair=0x22aa760 00:36:02.089 [2024-10-01 22:34:57.259321] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.089 [2024-10-01 22:34:57.259327] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.089 [2024-10-01 22:34:57.259330] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.089 [2024-10-01 22:34:57.259334] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230aa80) on tqpair=0x22aa760 00:36:02.089 [2024-10-01 22:34:57.259344] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.089 [2024-10-01 22:34:57.259350] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.089 [2024-10-01 22:34:57.259353] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.089 [2024-10-01 22:34:57.259357] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230ad80) on tqpair=0x22aa760 00:36:02.089 [2024-10-01 22:34:57.259364] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.089 [2024-10-01 22:34:57.259370] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.089 [2024-10-01 22:34:57.259373] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.089 [2024-10-01 22:34:57.259377] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230af00) on tqpair=0x22aa760 00:36:02.089 ===================================================== 00:36:02.089 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:02.089 ===================================================== 00:36:02.089 Controller Capabilities/Features 00:36:02.089 ================================ 00:36:02.089 Vendor ID: 8086 00:36:02.089 Subsystem Vendor ID: 8086 00:36:02.089 Serial Number: SPDK00000000000001 00:36:02.089 Model Number: SPDK bdev Controller 00:36:02.089 Firmware Version: 25.01 00:36:02.089 Recommended Arb Burst: 6 00:36:02.089 IEEE OUI Identifier: e4 d2 5c 00:36:02.089 Multi-path I/O 00:36:02.089 May have multiple subsystem ports: Yes 00:36:02.089 May have multiple controllers: Yes 00:36:02.089 Associated with SR-IOV VF: No 00:36:02.089 Max Data Transfer Size: 131072 00:36:02.089 Max Number of Namespaces: 32 00:36:02.089 Max Number of I/O Queues: 127 00:36:02.089 NVMe Specification Version (VS): 1.3 00:36:02.089 NVMe Specification Version (Identify): 1.3 00:36:02.089 Maximum Queue Entries: 128 00:36:02.089 Contiguous Queues Required: Yes 00:36:02.089 Arbitration Mechanisms Supported 00:36:02.089 Weighted Round Robin: Not Supported 
00:36:02.089 Vendor Specific: Not Supported 00:36:02.089 Reset Timeout: 15000 ms 00:36:02.089 Doorbell Stride: 4 bytes 00:36:02.089 NVM Subsystem Reset: Not Supported 00:36:02.089 Command Sets Supported 00:36:02.089 NVM Command Set: Supported 00:36:02.089 Boot Partition: Not Supported 00:36:02.089 Memory Page Size Minimum: 4096 bytes 00:36:02.089 Memory Page Size Maximum: 4096 bytes 00:36:02.089 Persistent Memory Region: Not Supported 00:36:02.089 Optional Asynchronous Events Supported 00:36:02.089 Namespace Attribute Notices: Supported 00:36:02.089 Firmware Activation Notices: Not Supported 00:36:02.089 ANA Change Notices: Not Supported 00:36:02.089 PLE Aggregate Log Change Notices: Not Supported 00:36:02.089 LBA Status Info Alert Notices: Not Supported 00:36:02.089 EGE Aggregate Log Change Notices: Not Supported 00:36:02.089 Normal NVM Subsystem Shutdown event: Not Supported 00:36:02.089 Zone Descriptor Change Notices: Not Supported 00:36:02.089 Discovery Log Change Notices: Not Supported 00:36:02.089 Controller Attributes 00:36:02.089 128-bit Host Identifier: Supported 00:36:02.089 Non-Operational Permissive Mode: Not Supported 00:36:02.089 NVM Sets: Not Supported 00:36:02.089 Read Recovery Levels: Not Supported 00:36:02.089 Endurance Groups: Not Supported 00:36:02.089 Predictable Latency Mode: Not Supported 00:36:02.089 Traffic Based Keep ALive: Not Supported 00:36:02.089 Namespace Granularity: Not Supported 00:36:02.089 SQ Associations: Not Supported 00:36:02.089 UUID List: Not Supported 00:36:02.089 Multi-Domain Subsystem: Not Supported 00:36:02.089 Fixed Capacity Management: Not Supported 00:36:02.089 Variable Capacity Management: Not Supported 00:36:02.089 Delete Endurance Group: Not Supported 00:36:02.089 Delete NVM Set: Not Supported 00:36:02.089 Extended LBA Formats Supported: Not Supported 00:36:02.089 Flexible Data Placement Supported: Not Supported 00:36:02.089 00:36:02.089 Controller Memory Buffer Support 00:36:02.089 ================================ 00:36:02.089 Supported: No 00:36:02.089 00:36:02.089 Persistent Memory Region Support 00:36:02.089 ================================ 00:36:02.089 Supported: No 00:36:02.089 00:36:02.089 Admin Command Set Attributes 00:36:02.089 ============================ 00:36:02.089 Security Send/Receive: Not Supported 00:36:02.089 Format NVM: Not Supported 00:36:02.089 Firmware Activate/Download: Not Supported 00:36:02.089 Namespace Management: Not Supported 00:36:02.089 Device Self-Test: Not Supported 00:36:02.089 Directives: Not Supported 00:36:02.090 NVMe-MI: Not Supported 00:36:02.090 Virtualization Management: Not Supported 00:36:02.090 Doorbell Buffer Config: Not Supported 00:36:02.090 Get LBA Status Capability: Not Supported 00:36:02.090 Command & Feature Lockdown Capability: Not Supported 00:36:02.090 Abort Command Limit: 4 00:36:02.090 Async Event Request Limit: 4 00:36:02.090 Number of Firmware Slots: N/A 00:36:02.090 Firmware Slot 1 Read-Only: N/A 00:36:02.090 Firmware Activation Without Reset: N/A 00:36:02.090 Multiple Update Detection Support: N/A 00:36:02.090 Firmware Update Granularity: No Information Provided 00:36:02.090 Per-Namespace SMART Log: No 00:36:02.090 Asymmetric Namespace Access Log Page: Not Supported 00:36:02.090 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:36:02.090 Command Effects Log Page: Supported 00:36:02.090 Get Log Page Extended Data: Supported 00:36:02.090 Telemetry Log Pages: Not Supported 00:36:02.090 Persistent Event Log Pages: Not Supported 00:36:02.090 Supported Log Pages Log Page: May Support 
00:36:02.090 Commands Supported & Effects Log Page: Not Supported 00:36:02.090 Feature Identifiers & Effects Log Page:May Support 00:36:02.090 NVMe-MI Commands & Effects Log Page: May Support 00:36:02.090 Data Area 4 for Telemetry Log: Not Supported 00:36:02.090 Error Log Page Entries Supported: 128 00:36:02.090 Keep Alive: Supported 00:36:02.090 Keep Alive Granularity: 10000 ms 00:36:02.090 00:36:02.090 NVM Command Set Attributes 00:36:02.090 ========================== 00:36:02.090 Submission Queue Entry Size 00:36:02.090 Max: 64 00:36:02.090 Min: 64 00:36:02.090 Completion Queue Entry Size 00:36:02.090 Max: 16 00:36:02.090 Min: 16 00:36:02.090 Number of Namespaces: 32 00:36:02.090 Compare Command: Supported 00:36:02.090 Write Uncorrectable Command: Not Supported 00:36:02.090 Dataset Management Command: Supported 00:36:02.090 Write Zeroes Command: Supported 00:36:02.090 Set Features Save Field: Not Supported 00:36:02.090 Reservations: Supported 00:36:02.090 Timestamp: Not Supported 00:36:02.090 Copy: Supported 00:36:02.090 Volatile Write Cache: Present 00:36:02.090 Atomic Write Unit (Normal): 1 00:36:02.090 Atomic Write Unit (PFail): 1 00:36:02.090 Atomic Compare & Write Unit: 1 00:36:02.090 Fused Compare & Write: Supported 00:36:02.090 Scatter-Gather List 00:36:02.090 SGL Command Set: Supported 00:36:02.090 SGL Keyed: Supported 00:36:02.090 SGL Bit Bucket Descriptor: Not Supported 00:36:02.090 SGL Metadata Pointer: Not Supported 00:36:02.090 Oversized SGL: Not Supported 00:36:02.090 SGL Metadata Address: Not Supported 00:36:02.090 SGL Offset: Supported 00:36:02.090 Transport SGL Data Block: Not Supported 00:36:02.090 Replay Protected Memory Block: Not Supported 00:36:02.090 00:36:02.090 Firmware Slot Information 00:36:02.090 ========================= 00:36:02.090 Active slot: 1 00:36:02.090 Slot 1 Firmware Revision: 25.01 00:36:02.090 00:36:02.090 00:36:02.090 Commands Supported and Effects 00:36:02.090 ============================== 00:36:02.090 Admin Commands 00:36:02.090 -------------- 00:36:02.090 Get Log Page (02h): Supported 00:36:02.090 Identify (06h): Supported 00:36:02.090 Abort (08h): Supported 00:36:02.090 Set Features (09h): Supported 00:36:02.090 Get Features (0Ah): Supported 00:36:02.090 Asynchronous Event Request (0Ch): Supported 00:36:02.090 Keep Alive (18h): Supported 00:36:02.090 I/O Commands 00:36:02.090 ------------ 00:36:02.090 Flush (00h): Supported LBA-Change 00:36:02.090 Write (01h): Supported LBA-Change 00:36:02.090 Read (02h): Supported 00:36:02.090 Compare (05h): Supported 00:36:02.090 Write Zeroes (08h): Supported LBA-Change 00:36:02.090 Dataset Management (09h): Supported LBA-Change 00:36:02.090 Copy (19h): Supported LBA-Change 00:36:02.090 00:36:02.090 Error Log 00:36:02.090 ========= 00:36:02.090 00:36:02.090 Arbitration 00:36:02.090 =========== 00:36:02.090 Arbitration Burst: 1 00:36:02.090 00:36:02.090 Power Management 00:36:02.090 ================ 00:36:02.090 Number of Power States: 1 00:36:02.090 Current Power State: Power State #0 00:36:02.090 Power State #0: 00:36:02.090 Max Power: 0.00 W 00:36:02.090 Non-Operational State: Operational 00:36:02.090 Entry Latency: Not Reported 00:36:02.090 Exit Latency: Not Reported 00:36:02.090 Relative Read Throughput: 0 00:36:02.090 Relative Read Latency: 0 00:36:02.090 Relative Write Throughput: 0 00:36:02.090 Relative Write Latency: 0 00:36:02.090 Idle Power: Not Reported 00:36:02.090 Active Power: Not Reported 00:36:02.090 Non-Operational Permissive Mode: Not Supported 00:36:02.090 00:36:02.090 Health 
Information 00:36:02.090 ================== 00:36:02.090 Critical Warnings: 00:36:02.090 Available Spare Space: OK 00:36:02.090 Temperature: OK 00:36:02.090 Device Reliability: OK 00:36:02.090 Read Only: No 00:36:02.090 Volatile Memory Backup: OK 00:36:02.090 Current Temperature: 0 Kelvin (-273 Celsius) 00:36:02.090 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:36:02.090 Available Spare: 0% 00:36:02.090 Available Spare Threshold: 0% 00:36:02.090 Life Percentage Used:[2024-10-01 22:34:57.259475] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.090 [2024-10-01 22:34:57.259480] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x22aa760) 00:36:02.090 [2024-10-01 22:34:57.259487] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.090 [2024-10-01 22:34:57.259498] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230af00, cid 7, qid 0 00:36:02.090 [2024-10-01 22:34:57.259681] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.090 [2024-10-01 22:34:57.259688] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.090 [2024-10-01 22:34:57.259691] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.090 [2024-10-01 22:34:57.259695] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230af00) on tqpair=0x22aa760 00:36:02.090 [2024-10-01 22:34:57.259724] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:36:02.090 [2024-10-01 22:34:57.259734] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230a480) on tqpair=0x22aa760 00:36:02.090 [2024-10-01 22:34:57.259740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:02.090 [2024-10-01 22:34:57.259745] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230a600) on tqpair=0x22aa760 00:36:02.090 [2024-10-01 22:34:57.259750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:02.090 [2024-10-01 22:34:57.259755] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230a780) on tqpair=0x22aa760 00:36:02.090 [2024-10-01 22:34:57.259759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:02.090 [2024-10-01 22:34:57.259764] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230a900) on tqpair=0x22aa760 00:36:02.090 [2024-10-01 22:34:57.259769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:02.090 [2024-10-01 22:34:57.259777] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.090 [2024-10-01 22:34:57.259781] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.090 [2024-10-01 22:34:57.259784] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22aa760) 00:36:02.090 [2024-10-01 22:34:57.259791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.090 [2024-10-01 22:34:57.259805] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230a900, cid 3, qid 0 00:36:02.090 [2024-10-01 
22:34:57.259953] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.090 [2024-10-01 22:34:57.259959] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.090 [2024-10-01 22:34:57.259963] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.090 [2024-10-01 22:34:57.259967] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230a900) on tqpair=0x22aa760 00:36:02.090 [2024-10-01 22:34:57.259973] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.090 [2024-10-01 22:34:57.259977] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.090 [2024-10-01 22:34:57.259981] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22aa760) 00:36:02.090 [2024-10-01 22:34:57.259987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.090 [2024-10-01 22:34:57.260000] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230a900, cid 3, qid 0 00:36:02.090 [2024-10-01 22:34:57.260177] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.090 [2024-10-01 22:34:57.260184] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.090 [2024-10-01 22:34:57.260187] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.090 [2024-10-01 22:34:57.260191] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230a900) on tqpair=0x22aa760 00:36:02.090 [2024-10-01 22:34:57.260195] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:36:02.090 [2024-10-01 22:34:57.260200] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:36:02.090 [2024-10-01 22:34:57.260209] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.090 [2024-10-01 22:34:57.260213] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.090 [2024-10-01 22:34:57.260216] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22aa760) 00:36:02.090 [2024-10-01 22:34:57.260223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.090 [2024-10-01 22:34:57.260233] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230a900, cid 3, qid 0 00:36:02.090 [2024-10-01 22:34:57.260385] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.090 [2024-10-01 22:34:57.260392] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.090 [2024-10-01 22:34:57.260395] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.091 [2024-10-01 22:34:57.260399] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230a900) on tqpair=0x22aa760 00:36:02.091 [2024-10-01 22:34:57.260409] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.091 [2024-10-01 22:34:57.260413] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.091 [2024-10-01 22:34:57.260416] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22aa760) 00:36:02.091 [2024-10-01 22:34:57.260423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.091 [2024-10-01 22:34:57.260433] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230a900, cid 3, qid 0 00:36:02.091 [2024-10-01 22:34:57.260600] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.091 [2024-10-01 22:34:57.260607] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.091 [2024-10-01 22:34:57.260610] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.091 [2024-10-01 22:34:57.260614] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230a900) on tqpair=0x22aa760 00:36:02.091 [2024-10-01 22:34:57.260627] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.091 [2024-10-01 22:34:57.260632] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.091 [2024-10-01 22:34:57.260635] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22aa760) 00:36:02.091 [2024-10-01 22:34:57.260644] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.091 [2024-10-01 22:34:57.260654] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230a900, cid 3, qid 0 00:36:02.091 [2024-10-01 22:34:57.260862] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.091 [2024-10-01 22:34:57.260869] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.091 [2024-10-01 22:34:57.260872] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.091 [2024-10-01 22:34:57.260876] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230a900) on tqpair=0x22aa760 00:36:02.091 [2024-10-01 22:34:57.260886] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.091 [2024-10-01 22:34:57.260889] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.091 [2024-10-01 22:34:57.260893] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22aa760) 00:36:02.091 [2024-10-01 22:34:57.260900] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.091 [2024-10-01 22:34:57.260909] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230a900, cid 3, qid 0 00:36:02.091 [2024-10-01 22:34:57.261140] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.091 [2024-10-01 22:34:57.261146] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.091 [2024-10-01 22:34:57.261150] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.091 [2024-10-01 22:34:57.261154] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230a900) on tqpair=0x22aa760 00:36:02.091 [2024-10-01 22:34:57.261163] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.091 [2024-10-01 22:34:57.261167] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.091 [2024-10-01 22:34:57.261171] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22aa760) 00:36:02.091 [2024-10-01 22:34:57.261177] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.091 [2024-10-01 22:34:57.261187] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230a900, cid 3, qid 0 00:36:02.091 [2024-10-01 22:34:57.261411] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.091 [2024-10-01 
22:34:57.261417] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.091 [2024-10-01 22:34:57.261421] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.091 [2024-10-01 22:34:57.261425] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230a900) on tqpair=0x22aa760 00:36:02.091 [2024-10-01 22:34:57.261434] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.091 [2024-10-01 22:34:57.261438] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.091 [2024-10-01 22:34:57.261441] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22aa760) 00:36:02.091 [2024-10-01 22:34:57.261448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.091 [2024-10-01 22:34:57.261458] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230a900, cid 3, qid 0 00:36:02.091 [2024-10-01 22:34:57.265634] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.091 [2024-10-01 22:34:57.265642] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.091 [2024-10-01 22:34:57.265646] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.091 [2024-10-01 22:34:57.265650] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230a900) on tqpair=0x22aa760 00:36:02.091 [2024-10-01 22:34:57.265660] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:02.091 [2024-10-01 22:34:57.265664] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:02.091 [2024-10-01 22:34:57.265667] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22aa760) 00:36:02.091 [2024-10-01 22:34:57.265674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.091 [2024-10-01 22:34:57.265688] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230a900, cid 3, qid 0 00:36:02.091 [2024-10-01 22:34:57.265834] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:02.091 [2024-10-01 22:34:57.265840] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:02.091 [2024-10-01 22:34:57.265844] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:02.091 [2024-10-01 22:34:57.265848] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230a900) on tqpair=0x22aa760 00:36:02.091 [2024-10-01 22:34:57.265855] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:36:02.091 0% 00:36:02.091 Data Units Read: 0 00:36:02.091 Data Units Written: 0 00:36:02.091 Host Read Commands: 0 00:36:02.091 Host Write Commands: 0 00:36:02.091 Controller Busy Time: 0 minutes 00:36:02.091 Power Cycles: 0 00:36:02.091 Power On Hours: 0 hours 00:36:02.091 Unsafe Shutdowns: 0 00:36:02.091 Unrecoverable Media Errors: 0 00:36:02.091 Lifetime Error Log Entries: 0 00:36:02.091 Warning Temperature Time: 0 minutes 00:36:02.091 Critical Temperature Time: 0 minutes 00:36:02.091 00:36:02.091 Number of Queues 00:36:02.091 ================ 00:36:02.091 Number of I/O Submission Queues: 127 00:36:02.091 Number of I/O Completion Queues: 127 00:36:02.091 00:36:02.091 Active Namespaces 00:36:02.091 ================= 00:36:02.091 Namespace ID:1 00:36:02.091 Error Recovery Timeout: Unlimited 00:36:02.091 Command 
Set Identifier: NVM (00h) 00:36:02.091 Deallocate: Supported 00:36:02.091 Deallocated/Unwritten Error: Not Supported 00:36:02.091 Deallocated Read Value: Unknown 00:36:02.091 Deallocate in Write Zeroes: Not Supported 00:36:02.091 Deallocated Guard Field: 0xFFFF 00:36:02.091 Flush: Supported 00:36:02.091 Reservation: Supported 00:36:02.091 Namespace Sharing Capabilities: Multiple Controllers 00:36:02.091 Size (in LBAs): 131072 (0GiB) 00:36:02.091 Capacity (in LBAs): 131072 (0GiB) 00:36:02.091 Utilization (in LBAs): 131072 (0GiB) 00:36:02.091 NGUID: ABCDEF0123456789ABCDEF0123456789 00:36:02.091 EUI64: ABCDEF0123456789 00:36:02.091 UUID: 98e67710-0e57-46ad-a4d5-b097150ad62f 00:36:02.091 Thin Provisioning: Not Supported 00:36:02.091 Per-NS Atomic Units: Yes 00:36:02.091 Atomic Boundary Size (Normal): 0 00:36:02.091 Atomic Boundary Size (PFail): 0 00:36:02.091 Atomic Boundary Offset: 0 00:36:02.091 Maximum Single Source Range Length: 65535 00:36:02.091 Maximum Copy Length: 65535 00:36:02.091 Maximum Source Range Count: 1 00:36:02.091 NGUID/EUI64 Never Reused: No 00:36:02.091 Namespace Write Protected: No 00:36:02.091 Number of LBA Formats: 1 00:36:02.091 Current LBA Format: LBA Format #00 00:36:02.091 LBA Format #00: Data Size: 512 Metadata Size: 0 00:36:02.091 00:36:02.091 22:34:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:36:02.091 22:34:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:02.091 22:34:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.091 22:34:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:02.091 22:34:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.091 22:34:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:36:02.091 22:34:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:36:02.091 22:34:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # nvmfcleanup 00:36:02.091 22:34:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:36:02.091 22:34:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:02.092 22:34:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:36:02.092 22:34:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:02.092 22:34:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:02.092 rmmod nvme_tcp 00:36:02.380 rmmod nvme_fabrics 00:36:02.380 rmmod nvme_keyring 00:36:02.380 22:34:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:02.380 22:34:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:36:02.380 22:34:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:36:02.380 22:34:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@515 -- # '[' -n 302708 ']' 00:36:02.380 22:34:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # killprocess 302708 00:36:02.380 22:34:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 302708 ']' 00:36:02.380 22:34:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 302708 00:36:02.380 22:34:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:36:02.380 22:34:57 
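The xtrace above is the identify test's teardown: the subsystem is deleted over RPC, the kernel NVMe/TCP initiator modules are unloaded, the target process is killed, and the iptables entries the test added are flushed. A condensed sketch of the same sequence, assuming $SPDK_DIR points at the spdk checkout and $tgt_pid holds the nvmf_tgt pid (both names are placeholders, not variables from the script):

    # Tear down the test subsystem and host-side state, mirroring nvmftestfini.
    "$SPDK_DIR"/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp          # nvme_fabrics/nvme_keyring unload as deps
    kill "$tgt_pid"                  # killprocess also waits on the pid
    # Drop the SPDK_NVMF iptables entries while restoring everything else:
    iptables-save | grep -v SPDK_NVMF | iptables-restore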
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:02.380 22:34:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 302708 00:36:02.380 22:34:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:02.380 22:34:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:02.380 22:34:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 302708' 00:36:02.380 killing process with pid 302708 00:36:02.380 22:34:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 302708 00:36:02.380 22:34:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 302708 00:36:02.380 22:34:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:36:02.380 22:34:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:02.380 22:34:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:02.380 22:34:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:36:02.380 22:34:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-save 00:36:02.380 22:34:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:02.380 22:34:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-restore 00:36:02.659 22:34:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:02.659 22:34:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:02.659 22:34:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:02.659 22:34:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:02.659 22:34:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:04.577 22:34:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:04.577 00:36:04.577 real 0m11.250s 00:36:04.577 user 0m7.966s 00:36:04.577 sys 0m5.985s 00:36:04.577 22:34:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:04.577 22:34:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:04.577 ************************************ 00:36:04.577 END TEST nvmf_identify 00:36:04.577 ************************************ 00:36:04.577 22:34:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:36:04.577 22:34:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:36:04.577 22:34:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:04.577 22:34:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.577 ************************************ 00:36:04.577 START TEST nvmf_perf 00:36:04.577 ************************************ 00:36:04.577 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:36:04.840 * Looking for test storage... 
00:36:04.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:04.840 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:04.840 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:36:04.840 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:04.840 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:04.840 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:04.840 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:04.840 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:04.840 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:36:04.840 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:36:04.840 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:36:04.840 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:36:04.840 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:36:04.840 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:36:04.840 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:36:04.840 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:04.840 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:36:04.840 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:36:04.840 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:04.840 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:04.840 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:36:04.840 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:36:04.840 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:04.840 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:36:04.840 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:36:04.840 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:36:04.840 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:36:04.840 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:04.840 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:36:04.840 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:36:04.840 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:04.840 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:04.840 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:36:04.840 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:04.840 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:04.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:04.840 --rc genhtml_branch_coverage=1 00:36:04.840 --rc genhtml_function_coverage=1 00:36:04.840 --rc genhtml_legend=1 00:36:04.840 --rc geninfo_all_blocks=1 00:36:04.840 --rc geninfo_unexecuted_blocks=1 00:36:04.840 00:36:04.840 ' 00:36:04.840 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:04.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:04.840 --rc genhtml_branch_coverage=1 00:36:04.840 --rc genhtml_function_coverage=1 00:36:04.840 --rc genhtml_legend=1 00:36:04.840 --rc geninfo_all_blocks=1 00:36:04.840 --rc geninfo_unexecuted_blocks=1 00:36:04.840 00:36:04.840 ' 00:36:04.840 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:04.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:04.840 --rc genhtml_branch_coverage=1 00:36:04.840 --rc genhtml_function_coverage=1 00:36:04.840 --rc genhtml_legend=1 00:36:04.840 --rc geninfo_all_blocks=1 00:36:04.840 --rc geninfo_unexecuted_blocks=1 00:36:04.840 00:36:04.840 ' 00:36:04.840 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:04.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:04.840 --rc genhtml_branch_coverage=1 00:36:04.840 --rc genhtml_function_coverage=1 00:36:04.840 --rc genhtml_legend=1 00:36:04.840 --rc geninfo_all_blocks=1 00:36:04.840 --rc geninfo_unexecuted_blocks=1 00:36:04.840 00:36:04.840 ' 00:36:04.840 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:04.840 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:36:04.840 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:04.840 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:04.840 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- 
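The lt/cmp_versions trace above splits each version string on ".", "-" and ":" and compares the parts numerically, left to right, which is how "1.15" sorts below "2". A standalone sketch of that comparison, not the verbatim scripts/common.sh function:

    # lt A B: succeed (return 0) when version A is strictly older than B.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1    # equal versions are not less-than
    }
    lt 1.15 2 && echo "lcov predates 2.x, keep the branch-coverage rc options"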
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:04.840 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:04.840 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:04.840 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:04.840 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:04.840 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:04.840 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:04.840 22:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:04.840 22:35:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:36:04.840 22:35:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:36:04.840 22:35:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:04.840 22:35:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:04.840 22:35:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:04.841 22:35:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:04.841 22:35:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:04.841 22:35:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:36:04.841 22:35:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:04.841 22:35:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:04.841 22:35:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:04.841 22:35:00 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.841 22:35:00 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.841 22:35:00 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.841 22:35:00 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:36:04.841 22:35:00 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.841 22:35:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:36:04.841 22:35:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:04.841 22:35:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:04.841 22:35:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:04.841 22:35:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:04.841 22:35:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:04.841 22:35:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:04.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:04.841 22:35:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:04.841 22:35:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:04.841 22:35:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:04.841 22:35:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:36:04.841 22:35:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:36:04.841 22:35:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:04.841 22:35:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:36:04.841 22:35:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:36:04.841 22:35:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:04.841 22:35:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # prepare_net_devs 00:36:04.841 22:35:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:36:04.841 22:35:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:36:04.841 22:35:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:04.841 22:35:00 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:04.841 22:35:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:04.841 22:35:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:36:04.841 22:35:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:36:04.841 22:35:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:36:04.841 22:35:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:36:12.983 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:12.983 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:36:12.983 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:12.983 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:12.983 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:12.983 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:12.983 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:12.983 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:36:12.983 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:12.983 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:36:12.983 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:36:12.983 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:36:12.983 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:36:12.983 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:36:12.983 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:36:12.983 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:12.983 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:12.983 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:12.983 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:12.983 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:12.983 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:12.983 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:12.983 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:12.983 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:12.983 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:12.983 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:12.983 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:12.984 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:12.984 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:12.984 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:12.984 22:35:06 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:12.984 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # is_hw=yes 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:12.984 22:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:12.984 22:35:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:12.984 22:35:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:12.984 22:35:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:12.984 22:35:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:12.984 22:35:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:12.984 22:35:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:12.984 22:35:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:12.984 22:35:07 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:36:12.984 22:35:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:36:12.984 22:35:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:36:12.984 22:35:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:36:12.984 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:36:12.984 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms
00:36:12.984
00:36:12.984 --- 10.0.0.2 ping statistics ---
00:36:12.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:12.984 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms
00:36:12.984 22:35:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:36:12.984 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:36:12.984 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms
00:36:12.984
00:36:12.984 --- 10.0.0.1 ping statistics ---
00:36:12.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:12.984 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms
00:36:12.984 22:35:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:36:12.984 22:35:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # return 0
00:36:12.984 22:35:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:36:12.984 22:35:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:36:12.984 22:35:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:36:12.984 22:35:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:36:12.984 22:35:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:36:12.984 22:35:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:36:12.984 22:35:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:36:12.984 22:35:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:36:12.984 22:35:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:36:12.984 22:35:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable
00:36:12.984 22:35:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:36:12.984 22:35:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # nvmfpid=307115
00:36:12.984 22:35:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # waitforlisten 307115
00:36:12.984 22:35:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:36:12.984 22:35:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 307115 ']'
00:36:12.984 22:35:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:12.984 22:35:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100
00:36:12.984 22:35:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:36:12.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:12.984 22:35:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable
00:36:12.984 22:35:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:36:12.984 [2024-10-01 22:35:07.403969] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization...
00:36:12.984 [2024-10-01 22:35:07.404030] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:36:12.984 [2024-10-01 22:35:07.472574] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:36:12.984 [2024-10-01 22:35:07.538364] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:36:12.984 [2024-10-01 22:35:07.538400] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:36:12.984 [2024-10-01 22:35:07.538408] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:36:12.984 [2024-10-01 22:35:07.538414] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:36:12.984 [2024-10-01 22:35:07.538420] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:36:12.984 [2024-10-01 22:35:07.538565] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:36:12.984 [2024-10-01 22:35:07.538675] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:36:12.984 [2024-10-01 22:35:07.538773] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:36:12.984 [2024-10-01 22:35:07.538775] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:36:12.984 22:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:36:12.984 22:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0
00:36:12.984 22:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:36:12.985 22:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable
00:36:12.985 22:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:36:13.245 22:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:36:13.245 22:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:36:13.245 22:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config
00:36:13.505 22:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev
00:36:13.505 22:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr'
00:36:13.765 22:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0
00:36:13.765 22:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:36:14.025 22:35:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0'
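The target-side bring-up traced above reduces to a short RPC sequence against the freshly started nvmf_tgt. A minimal shell sketch follows, under these assumptions: the SPDK checkout lives at the workspace path this job uses, the target is already running (as nvmfappstart arranged above), and gen_nvme.sh output is fed to load_subsystem_config on stdin (the exact redirection inside perf.sh is not visible in the trace, so the piping here is an assumption):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc="$SPDK/scripts/rpc.py"
  # Attach the local NVMe controller(s) described by the generated bdev config (assumed stdin piping).
  "$SPDK/scripts/gen_nvme.sh" | "$rpc" load_subsystem_config
  # Recover Nvme0's PCI address; the trace above resolves it to 0000:65:00.0.
  trid=$("$rpc" framework_get_config bdev | jq -r '.[].params | select(.name=="Nvme0").traddr')
  # Create a 64 MiB malloc bdev with 512-byte blocks; the RPC prints its name, Malloc0.
  "$rpc" bdev_malloc_create 64 512
  # Export both bdevs over NVMe/TCP on 10.0.0.2:4420 -- the steps perf.sh runs next.
  "$rpc" nvmf_create_transport -t tcp -o
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Note that rpc.py can be invoked from the default network namespace even though the target lives in cvl_0_0_ns_spdk: the RPC channel is the path-based UNIX socket /var/tmp/spdk.sock, and path-based UNIX sockets are not scoped to a network namespace.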
00:36:14.025 22:35:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']'
00:36:14.025 22:35:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1'
00:36:14.025 22:35:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']'
00:36:14.025 22:35:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:36:14.025 [2024-10-01 22:35:09.271946] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:36:14.285 22:35:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:36:14.285 22:35:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:36:14.285 22:35:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:36:14.545 22:35:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:36:14.546 22:35:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:36:14.807 22:35:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:36:14.807 [2024-10-01 22:35:10.018878] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:36:14.807 22:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:36:15.067 22:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']'
00:36:15.067 22:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0'
00:36:15.067 22:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:36:15.068 22:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0'
00:36:16.449 Initializing NVMe Controllers
00:36:16.449 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a]
00:36:16.449 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0
00:36:16.449 Initialization complete. Launching workers.
00:36:16.449 ========================================================
00:36:16.449 Latency(us)
00:36:16.449 Device Information : IOPS MiB/s Average min max
00:36:16.449 PCIE (0000:65:00.0) NSID 1 from core 0: 79250.17 309.57 403.06 13.27 5283.78
00:36:16.449 ========================================================
00:36:16.449 Total : 79250.17 309.57 403.06 13.27 5283.78
00:36:16.449
00:36:16.449 22:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:36:17.831 Initializing NVMe Controllers
00:36:17.831 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:36:17.831 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:36:17.831 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:36:17.831 Initialization complete. Launching workers.
00:36:17.831 ========================================================
00:36:17.831 Latency(us)
00:36:17.831 Device Information : IOPS MiB/s Average min max
00:36:17.831 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 91.94 0.36 11073.61 114.30 45106.54
00:36:17.831 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 59.96 0.23 16677.01 5365.03 47888.73
00:36:17.831 ========================================================
00:36:17.831 Total : 151.90 0.59 13285.48 114.30 47888.73
00:36:17.831
00:36:17.831 22:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:36:18.771 Initializing NVMe Controllers
00:36:18.771 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:36:18.771 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:36:18.771 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:36:18.771 Initialization complete. Launching workers.
00:36:18.771 ========================================================
00:36:18.771 Latency(us)
00:36:18.771 Device Information : IOPS MiB/s Average min max
00:36:18.771 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10339.20 40.39 3095.32 471.04 8269.54
00:36:18.771 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3664.36 14.31 8778.76 5117.82 17267.95
00:36:18.771 ========================================================
00:36:18.771 Total : 14003.56 54.70 4582.53 471.04 17267.95
00:36:19.032
00:36:19.032 22:35:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:36:19.032 22:35:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:36:19.032 22:35:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:36:21.572 Initializing NVMe Controllers
00:36:21.572 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:36:21.572 Controller IO queue size 128, less than required.
00:36:21.572 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:36:21.572 Controller IO queue size 128, less than required.
00:36:21.572 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:36:21.572 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:36:21.572 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:36:21.572 Initialization complete. Launching workers.
00:36:21.572 ========================================================
00:36:21.572 Latency(us)
00:36:21.572 Device Information : IOPS MiB/s Average min max
00:36:21.572 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1619.48 404.87 80336.41 49939.04 129605.78
00:36:21.572 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 557.99 139.50 236578.49 69837.04 390010.28
00:36:21.572 ========================================================
00:36:21.572 Total : 2177.47 544.37 120374.57 49939.04 390010.28
00:36:21.832
00:36:21.832 22:35:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:36:21.832 No valid NVMe controllers or AIO or URING devices found
00:36:21.832 Initializing NVMe Controllers
00:36:21.832 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:36:21.832 Controller IO queue size 128, less than required.
00:36:21.832 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:36:21.832 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:36:21.832 Controller IO queue size 128, less than required.
00:36:21.832 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:36:21.832 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:36:21.832 WARNING: Some requested NVMe devices were skipped
00:36:21.832 22:35:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:36:24.371 Initializing NVMe Controllers
00:36:24.371 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:36:24.371 Controller IO queue size 128, less than required.
00:36:24.371 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:36:24.371 Controller IO queue size 128, less than required.
00:36:24.371 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:36:24.371 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:36:24.371 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:36:24.371 Initialization complete. Launching workers.
00:36:24.371
00:36:24.371 ====================
00:36:24.371 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:36:24.371 TCP transport:
00:36:24.371 polls: 24204
00:36:24.371 idle_polls: 14000
00:36:24.371 sock_completions: 10204
00:36:24.371 nvme_completions: 6225
00:36:24.371 submitted_requests: 9382
00:36:24.371 queued_requests: 1
00:36:24.371
00:36:24.371 ====================
00:36:24.371 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:36:24.371 TCP transport:
00:36:24.371 polls: 24386
00:36:24.371 idle_polls: 13594
00:36:24.371 sock_completions: 10792
00:36:24.371 nvme_completions: 6255
00:36:24.371 submitted_requests: 9362
00:36:24.371 queued_requests: 1
00:36:24.371 ========================================================
00:36:24.371 Latency(us)
00:36:24.371 Device Information : IOPS MiB/s Average min max
00:36:24.371 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1555.68 388.92 83987.24 53522.49 156421.48
00:36:24.371 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1563.18 390.80 82786.02 36886.55 113944.00
00:36:24.371 ========================================================
00:36:24.371 Total : 3118.86 779.72 83385.19 36886.55 156421.48
00:36:24.371
00:36:24.371 22:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:36:24.371 22:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:36:24.371 22:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:36:24.371 22:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:36:24.371 22:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:36:24.371 22:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # nvmfcleanup
00:36:24.371 22:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync
00:36:24.371 22:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:36:24.371 22:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e
00:36:24.371 22:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20}
00:36:24.371 22:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:36:24.371 rmmod nvme_tcp
00:36:24.371 rmmod nvme_fabrics
00:36:24.631 rmmod nvme_keyring
00:36:24.631 22:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:36:24.631 22:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e
00:36:24.631 22:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0
00:36:24.631 22:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@515 -- # '[' -n 307115 ']'
00:36:24.631 22:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # killprocess 307115
00:36:24.631 22:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 307115 ']'
00:36:24.631 22:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 307115
00:36:24.631 22:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname
00:36:24.631 22:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:36:24.631 22:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 307115
00:36:24.631 22:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:36:24.631 22:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:36:24.631 22:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 307115'
00:36:24.631 killing process with pid 307115
00:36:24.631 22:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 307115
00:36:24.631 22:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 307115
00:36:26.557 22:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:36:26.557 22:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:36:26.557 22:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:36:26.557 22:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr
00:36:26.557 22:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-save
00:36:26.557 22:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:36:26.557 22:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-restore
00:36:26.557 22:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:36:26.557 22:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:36:26.557 22:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:26.557 22:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:36:26.557 22:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:29.103 22:35:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:36:29.103
00:36:29.103 real 0m24.075s
00:36:29.103 user 0m58.228s
00:36:29.103 sys 0m8.411s
00:36:29.103 22:35:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:36:29.103 22:35:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:36:29.103 ************************************
00:36:29.103 END TEST nvmf_perf
00:36:29.103 ************************************
00:36:29.103 22:35:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:36:29.103 22:35:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:36:29.103 22:35:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:36:29.103 22:35:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:36:29.103 ************************************
00:36:29.103 START TEST nvmf_fio_host
00:36:29.103 ************************************
00:36:29.103 22:35:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:36:29.103 * Looking for test storage...
00:36:29.103 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:29.103 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:29.103 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:36:29.103 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:29.103 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:29.103 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:29.103 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:29.103 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:29.103 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:36:29.103 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:36:29.103 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:36:29.103 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:36:29.103 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:36:29.103 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:36:29.103 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:36:29.103 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:29.103 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:36:29.103 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:36:29.103 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:29.103 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:29.103 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:36:29.103 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:36:29.103 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:29.103 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:36:29.103 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:36:29.103 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:36:29.103 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:36:29.103 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:29.103 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:36:29.103 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:36:29.103 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:29.103 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:29.103 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:36:29.103 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:29.103 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:29.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:29.103 --rc genhtml_branch_coverage=1 00:36:29.103 --rc genhtml_function_coverage=1 00:36:29.103 --rc genhtml_legend=1 00:36:29.103 --rc geninfo_all_blocks=1 00:36:29.103 --rc geninfo_unexecuted_blocks=1 00:36:29.103 00:36:29.103 ' 00:36:29.103 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:29.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:29.103 --rc genhtml_branch_coverage=1 00:36:29.103 --rc genhtml_function_coverage=1 00:36:29.103 --rc genhtml_legend=1 00:36:29.103 --rc geninfo_all_blocks=1 00:36:29.103 --rc geninfo_unexecuted_blocks=1 00:36:29.103 00:36:29.103 ' 00:36:29.103 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:29.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:29.103 --rc genhtml_branch_coverage=1 00:36:29.103 --rc genhtml_function_coverage=1 00:36:29.103 --rc genhtml_legend=1 00:36:29.103 --rc geninfo_all_blocks=1 00:36:29.103 --rc geninfo_unexecuted_blocks=1 00:36:29.103 00:36:29.103 ' 00:36:29.103 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:29.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:29.103 --rc genhtml_branch_coverage=1 00:36:29.103 --rc genhtml_function_coverage=1 00:36:29.103 --rc genhtml_legend=1 00:36:29.103 --rc geninfo_all_blocks=1 00:36:29.103 --rc geninfo_unexecuted_blocks=1 00:36:29.103 00:36:29.103 ' 00:36:29.103 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:29.103 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:36:29.103 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:29.103 22:35:24 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:29.103 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:29.103 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:29.103 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:29.103 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:29.103 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:36:29.103 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:29.103 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:29.103 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:36:29.104 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:29.104 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:29.104 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:36:29.104 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:29.104 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:29.104 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:29.104 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:29.104 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:29.104 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:29.104 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:29.104 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:36:29.104 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:36:29.104 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:29.104 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:29.104 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:29.104 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:29.104 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:29.104 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:36:29.104 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:29.104 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:29.104 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:29.104 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:29.104 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:29.104 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:29.104 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:36:29.104 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:29.104 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:36:29.104 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:29.104 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:29.104 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:29.104 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:29.104 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:29.104 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:29.104 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:29.104 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:29.104 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:29.104 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:29.104 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:29.104 
22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:36:29.104 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:36:29.104 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:29.104 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:36:29.104 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:36:29.104 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:36:29.104 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:29.104 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:29.104 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:29.104 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:36:29.104 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:36:29.104 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:36:29.104 22:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:37.246 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:37.246 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:37.246 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:37.246 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # is_hw=yes 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:37.246 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:37.247 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:37.247 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:37.247 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:37.247 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:37.247 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:37.247 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:37.247 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:37.247 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:37.247 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:37.247 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:37.247 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:37.247 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:37.247 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:37.247 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:37.247 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:37.247 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:37.247 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.541 ms 00:36:37.247 00:36:37.247 --- 10.0.0.2 ping statistics --- 00:36:37.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:37.247 rtt min/avg/max/mdev = 0.541/0.541/0.541/0.000 ms 00:36:37.247 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:37.247 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:37.247 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:36:37.247 00:36:37.247 --- 10.0.0.1 ping statistics --- 00:36:37.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:37.247 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:36:37.247 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:37.247 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # return 0 00:36:37.247 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:36:37.247 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:37.247 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:36:37.247 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:36:37.247 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:37.247 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:36:37.247 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:36:37.247 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:36:37.247 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:36:37.247 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:37.247 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.247 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=314180 00:36:37.247 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:37.247 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:36:37.247 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 314180 00:36:37.247 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 314180 ']' 00:36:37.247 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:37.247 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:37.247 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:37.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:37.247 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:37.247 22:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.247 [2024-10-01 22:35:31.506981] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
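For anyone reproducing this outside the harness: the nvmf_tcp_init sequence traced above reduces to a short block of iproute2/iptables commands. A condensed sketch, assuming the same cvl_0_0/cvl_0_1 names this runner's NICs expose (adapt the interface names and namespace label to your hardware; condensed from the xtrace, not copied verbatim from nvmf/common.sh):

    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"          # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator IP stays on the host side
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                       # host -> namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1   # namespace -> host

Isolating the target port in its own namespace is what lets a single machine act as both initiator and target over real NIC hardware rather than loopback.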
00:36:37.247 [2024-10-01 22:35:31.507060] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:37.247 [2024-10-01 22:35:31.577906] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:37.247 [2024-10-01 22:35:31.646140] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:37.247 [2024-10-01 22:35:31.646181] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:37.247 [2024-10-01 22:35:31.646189] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:37.247 [2024-10-01 22:35:31.646196] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:37.247 [2024-10-01 22:35:31.646202] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:37.247 [2024-10-01 22:35:31.646344] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:37.247 [2024-10-01 22:35:31.646455] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:36:37.247 [2024-10-01 22:35:31.646611] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:36:37.247 [2024-10-01 22:35:31.646612] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:36:37.247 22:35:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:37.247 22:35:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:36:37.247 22:35:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:37.247 [2024-10-01 22:35:32.450520] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:37.247 22:35:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:36:37.247 22:35:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:37.247 22:35:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.508 22:35:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:36:37.508 Malloc1 00:36:37.508 22:35:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:37.769 22:35:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:36:38.030 22:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:38.030 [2024-10-01 22:35:33.244318] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:38.030 22:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:38.291 22:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:36:38.291 22:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:36:38.291 22:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:36:38.291 22:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:38.291 22:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:38.291 22:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:38.291 22:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:36:38.291 22:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:36:38.291 22:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:38.291 22:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:38.291 22:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:36:38.291 22:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:36:38.291 22:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:38.291 22:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:38.291 22:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:38.291 22:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:38.291 22:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:36:38.291 22:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:38.291 22:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:38.291 22:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:38.291 22:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:38.291 22:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:36:38.291 22:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:36:38.888 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:36:38.888 fio-3.35 00:36:38.888 Starting 1 thread 00:36:41.435 00:36:41.435 test: (groupid=0, jobs=1): 
err= 0: pid=314716: Tue Oct 1 22:35:36 2024 00:36:41.435 read: IOPS=13.0k, BW=50.9MiB/s (53.4MB/s)(102MiB/2004msec) 00:36:41.435 slat (usec): min=2, max=278, avg= 2.16, stdev= 2.46 00:36:41.435 clat (usec): min=3791, max=9753, avg=5410.24, stdev=825.57 00:36:41.435 lat (usec): min=3833, max=9760, avg=5412.40, stdev=825.70 00:36:41.435 clat percentiles (usec): 00:36:41.435 | 1.00th=[ 4359], 5.00th=[ 4621], 10.00th=[ 4752], 20.00th=[ 4883], 00:36:41.435 | 30.00th=[ 5014], 40.00th=[ 5080], 50.00th=[ 5211], 60.00th=[ 5276], 00:36:41.435 | 70.00th=[ 5407], 80.00th=[ 5604], 90.00th=[ 6980], 95.00th=[ 7504], 00:36:41.435 | 99.00th=[ 8160], 99.50th=[ 8356], 99.90th=[ 8848], 99.95th=[ 9372], 00:36:41.435 | 99.99th=[ 9634] 00:36:41.435 bw ( KiB/s): min=42936, max=55456, per=99.93%, avg=52130.00, stdev=6133.80, samples=4 00:36:41.435 iops : min=10734, max=13864, avg=13032.50, stdev=1533.45, samples=4 00:36:41.435 write: IOPS=13.0k, BW=50.9MiB/s (53.4MB/s)(102MiB/2004msec); 0 zone resets 00:36:41.435 slat (usec): min=2, max=269, avg= 2.22, stdev= 1.84 00:36:41.435 clat (usec): min=2919, max=8014, avg=4370.33, stdev=681.56 00:36:41.435 lat (usec): min=2937, max=8019, avg=4372.55, stdev=681.72 00:36:41.435 clat percentiles (usec): 00:36:41.435 | 1.00th=[ 3490], 5.00th=[ 3720], 10.00th=[ 3818], 20.00th=[ 3949], 00:36:41.435 | 30.00th=[ 4047], 40.00th=[ 4113], 50.00th=[ 4178], 60.00th=[ 4293], 00:36:41.435 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 5669], 95.00th=[ 6063], 00:36:41.435 | 99.00th=[ 6587], 99.50th=[ 6783], 99.90th=[ 7635], 99.95th=[ 7767], 00:36:41.435 | 99.99th=[ 7963] 00:36:41.435 bw ( KiB/s): min=43320, max=55360, per=99.99%, avg=52148.00, stdev=5888.81, samples=4 00:36:41.435 iops : min=10830, max=13840, avg=13037.00, stdev=1472.20, samples=4 00:36:41.435 lat (msec) : 4=12.77%, 10=87.23% 00:36:41.435 cpu : usr=72.59%, sys=26.21%, ctx=42, majf=0, minf=9 00:36:41.435 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:36:41.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:41.435 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:41.435 issued rwts: total=26136,26130,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:41.435 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:41.435 00:36:41.435 Run status group 0 (all jobs): 00:36:41.435 READ: bw=50.9MiB/s (53.4MB/s), 50.9MiB/s-50.9MiB/s (53.4MB/s-53.4MB/s), io=102MiB (107MB), run=2004-2004msec 00:36:41.435 WRITE: bw=50.9MiB/s (53.4MB/s), 50.9MiB/s-50.9MiB/s (53.4MB/s-53.4MB/s), io=102MiB (107MB), run=2004-2004msec 00:36:41.435 22:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:36:41.435 22:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:36:41.435 22:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:41.435 22:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:41.435 22:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:41.435 
22:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:36:41.435 22:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:36:41.435 22:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:41.435 22:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:41.435 22:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:36:41.435 22:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:41.435 22:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:36:41.435 22:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:41.435 22:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:41.435 22:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:41.435 22:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:36:41.435 22:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:41.435 22:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:41.435 22:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:41.435 22:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:41.435 22:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:36:41.435 22:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:36:41.435 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:36:41.435 fio-3.35 00:36:41.435 Starting 1 thread 00:36:43.977 00:36:43.977 test: (groupid=0, jobs=1): err= 0: pid=315542: Tue Oct 1 22:35:38 2024 00:36:43.977 read: IOPS=9456, BW=148MiB/s (155MB/s)(296MiB/2006msec) 00:36:43.977 slat (usec): min=3, max=110, avg= 3.60, stdev= 1.61 00:36:43.977 clat (usec): min=1979, max=15448, avg=8167.83, stdev=1886.94 00:36:43.977 lat (usec): min=1982, max=15451, avg=8171.44, stdev=1887.06 00:36:43.977 clat percentiles (usec): 00:36:43.977 | 1.00th=[ 4424], 5.00th=[ 5276], 10.00th=[ 5735], 20.00th=[ 6456], 00:36:43.977 | 30.00th=[ 7046], 40.00th=[ 7570], 50.00th=[ 8029], 60.00th=[ 8717], 00:36:43.977 | 70.00th=[ 9503], 80.00th=[ 9896], 90.00th=[10290], 95.00th=[11076], 00:36:43.977 | 99.00th=[12911], 99.50th=[13960], 99.90th=[15008], 99.95th=[15270], 00:36:43.977 | 99.99th=[15401] 00:36:43.977 bw ( KiB/s): min=63456, max=90080, per=49.30%, avg=74584.00, stdev=11579.53, samples=4 00:36:43.977 iops : min= 3966, max= 5630, avg=4661.50, stdev=723.72, samples=4 00:36:43.977 write: IOPS=5703, BW=89.1MiB/s (93.5MB/s)(153MiB/1715msec); 0 zone resets 00:36:43.977 slat (usec): min=39, max=453, 
avg=41.01, stdev= 8.33 00:36:43.977 clat (usec): min=2675, max=17164, avg=9428.90, stdev=1570.47 00:36:43.977 lat (usec): min=2715, max=17204, avg=9469.91, stdev=1572.25 00:36:43.977 clat percentiles (usec): 00:36:43.977 | 1.00th=[ 6652], 5.00th=[ 7373], 10.00th=[ 7701], 20.00th=[ 8160], 00:36:43.977 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9241], 60.00th=[ 9634], 00:36:43.977 | 70.00th=[10159], 80.00th=[10683], 90.00th=[11338], 95.00th=[12256], 00:36:43.977 | 99.00th=[14353], 99.50th=[14877], 99.90th=[15795], 99.95th=[16188], 00:36:43.977 | 99.99th=[17171] 00:36:43.977 bw ( KiB/s): min=67264, max=92736, per=85.03%, avg=77600.00, stdev=10978.16, samples=4 00:36:43.977 iops : min= 4204, max= 5796, avg=4850.00, stdev=686.14, samples=4 00:36:43.977 lat (msec) : 2=0.01%, 4=0.36%, 10=77.19%, 20=22.44% 00:36:43.977 cpu : usr=85.04%, sys=13.86%, ctx=12, majf=0, minf=37 00:36:43.977 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:36:43.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:43.977 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:43.977 issued rwts: total=18969,9782,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:43.977 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:43.977 00:36:43.977 Run status group 0 (all jobs): 00:36:43.977 READ: bw=148MiB/s (155MB/s), 148MiB/s-148MiB/s (155MB/s-155MB/s), io=296MiB (311MB), run=2006-2006msec 00:36:43.977 WRITE: bw=89.1MiB/s (93.5MB/s), 89.1MiB/s-89.1MiB/s (93.5MB/s-93.5MB/s), io=153MiB (160MB), run=1715-1715msec 00:36:43.977 22:35:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:43.977 22:35:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:36:43.977 22:35:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:43.977 22:35:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:36:43.977 22:35:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:36:43.977 22:35:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:36:43.977 22:35:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:36:43.977 22:35:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:43.977 22:35:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:36:43.977 22:35:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:43.977 22:35:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:43.977 rmmod nvme_tcp 00:36:43.977 rmmod nvme_fabrics 00:36:43.977 rmmod nvme_keyring 00:36:44.237 22:35:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:44.237 22:35:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:36:44.237 22:35:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:36:44.237 22:35:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@515 -- # '[' -n 314180 ']' 00:36:44.237 22:35:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # killprocess 314180 00:36:44.237 22:35:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 314180 ']' 00:36:44.237 22:35:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 
314180 00:36:44.237 22:35:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:36:44.237 22:35:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:44.237 22:35:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 314180 00:36:44.237 22:35:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:44.237 22:35:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:44.237 22:35:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 314180' 00:36:44.237 killing process with pid 314180 00:36:44.237 22:35:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 314180 00:36:44.237 22:35:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 314180 00:36:44.498 22:35:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:36:44.498 22:35:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:44.498 22:35:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:44.498 22:35:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:36:44.498 22:35:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-save 00:36:44.498 22:35:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:44.498 22:35:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-restore 00:36:44.498 22:35:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:44.498 22:35:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:44.498 22:35:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:44.498 22:35:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:44.498 22:35:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:46.409 22:35:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:46.409 00:36:46.409 real 0m17.650s 00:36:46.409 user 1m7.163s 00:36:46.409 sys 0m7.562s 00:36:46.409 22:35:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:46.409 22:35:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.409 ************************************ 00:36:46.409 END TEST nvmf_fio_host 00:36:46.410 ************************************ 00:36:46.410 22:35:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:36:46.410 22:35:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:36:46.410 22:35:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:46.410 22:35:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.672 ************************************ 00:36:46.672 START TEST nvmf_failover 00:36:46.672 ************************************ 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:36:46.672 * Looking for test storage... 00:36:46.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:46.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:46.672 --rc genhtml_branch_coverage=1 00:36:46.672 --rc genhtml_function_coverage=1 00:36:46.672 --rc genhtml_legend=1 00:36:46.672 --rc geninfo_all_blocks=1 00:36:46.672 --rc geninfo_unexecuted_blocks=1 00:36:46.672 00:36:46.672 ' 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:46.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:46.672 --rc genhtml_branch_coverage=1 00:36:46.672 --rc genhtml_function_coverage=1 00:36:46.672 --rc genhtml_legend=1 00:36:46.672 --rc geninfo_all_blocks=1 00:36:46.672 --rc geninfo_unexecuted_blocks=1 00:36:46.672 00:36:46.672 ' 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:46.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:46.672 --rc genhtml_branch_coverage=1 00:36:46.672 --rc genhtml_function_coverage=1 00:36:46.672 --rc genhtml_legend=1 00:36:46.672 --rc geninfo_all_blocks=1 00:36:46.672 --rc geninfo_unexecuted_blocks=1 00:36:46.672 00:36:46.672 ' 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:46.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:46.672 --rc genhtml_branch_coverage=1 00:36:46.672 --rc genhtml_function_coverage=1 00:36:46.672 --rc genhtml_legend=1 00:36:46.672 --rc geninfo_all_blocks=1 00:36:46.672 --rc geninfo_unexecuted_blocks=1 00:36:46.672 00:36:46.672 ' 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:46.672 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:46.673 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:46.673 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:36:46.673 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:46.673 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:36:46.673 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:46.673 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:46.673 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:46.673 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:46.673 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:46.673 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:46.673 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:46.673 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:46.673 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:46.673 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:46.673 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:46.673 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:46.673 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
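Before the device discovery below, note how failover.sh drives I/O: instead of fio it launches bdevperf idle (-z) on a private RPC socket, attaches controllers over that socket, and only then starts the workload. A sketch using the binaries and arguments that appear later in this log (paths shortened; assumes you run from the spdk tree):

    rpc_py=./scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    ./build/examples/bdevperf -z -r "$sock" -q 128 -o 4096 -w verify -t 15 -f &
    # attach the first path; bdevperf exposes it as bdev NVMe0n1
    $rpc_py -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # kick off the configured workload once the bdev exists
    ./examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests

The private socket matters because the nvmf target already owns /var/tmp/spdk.sock; the two SPDK applications must not share an RPC endpoint.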
00:36:46.673 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:36:46.673 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:36:46.673 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:36:46.673 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:46.673 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # prepare_net_devs 00:36:46.673 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # local -g is_hw=no 00:36:46.673 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # remove_spdk_ns 00:36:46.673 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:46.673 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:46.673 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:46.673 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:36:46.673 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:36:46.673 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:36:46.673 22:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:36:53.260 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:53.260 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:36:53.260 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:53.260 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:53.260 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:53.260 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:53.260 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:53.260 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:36:53.260 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:53.260 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:36:53.260 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:36:53.260 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:36:53.260 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:36:53.260 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:36:53.260 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:36:53.260 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:53.260 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:53.260 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:53.260 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:53.260 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:53.260 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:53.260 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:53.260 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:53.260 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:53.260 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:53.260 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:53.260 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:53.260 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:53.261 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:53.261 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci 
in "${pci_devs[@]}" 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:53.261 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:53.261 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # is_hw=yes 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:53.261 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:53.523 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:53.523 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:53.523 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:53.523 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:53.523 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:53.523 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:53.523 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:53.523 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:53.523 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:53.523 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.588 ms 00:36:53.523 00:36:53.523 --- 10.0.0.2 ping statistics --- 00:36:53.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:53.523 rtt min/avg/max/mdev = 0.588/0.588/0.588/0.000 ms 00:36:53.523 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:53.523 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:53.523 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.258 ms 00:36:53.523 00:36:53.523 --- 10.0.0.1 ping statistics --- 00:36:53.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:53.523 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:36:53.523 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:53.523 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # return 0 00:36:53.523 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:36:53.523 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:53.523 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:36:53.523 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:36:53.523 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:53.523 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:36:53.523 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:36:53.523 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:36:53.523 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:36:53.523 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:53.523 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:36:53.523 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # nvmfpid=320019 00:36:53.523 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # waitforlisten 320019 00:36:53.523 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:36:53.523 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 320019 ']' 00:36:53.523 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:53.523 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:53.523 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:53.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:53.523 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:53.523 22:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:36:53.523 [2024-10-01 22:35:48.772834] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:36:53.523 [2024-10-01 22:35:48.772900] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:53.784 [2024-10-01 22:35:48.878148] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:53.784 [2024-10-01 22:35:48.973503] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
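What makes this a failover test is the topology configured next: a single subsystem backed by one Malloc namespace, with the same 10.0.0.2 address published on three ports so bdevperf can hop between paths as listeners are torn down. The RPC sequence that follows in this log is essentially (sketch; the log itself uses absolute workspace paths):

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s "$port"
    done

The bursts of 'recv state of tqpair … is same with the state(6) to be set' messages further down appear immediately after each nvmf_subsystem_remove_listener call, as the target drops the qpairs on the path being removed; the point of the test is that bdevperf keeps running on the remaining listeners.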
00:36:53.784 [2024-10-01 22:35:48.973561] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:53.784 [2024-10-01 22:35:48.973570] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:53.784 [2024-10-01 22:35:48.973578] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:53.784 [2024-10-01 22:35:48.973584] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:53.784 [2024-10-01 22:35:48.973731] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:36:53.784 [2024-10-01 22:35:48.973893] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:53.784 [2024-10-01 22:35:48.973893] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:36:54.353 22:35:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:54.353 22:35:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:36:54.353 22:35:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:36:54.353 22:35:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:54.353 22:35:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:36:54.612 22:35:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:54.612 22:35:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:54.612 [2024-10-01 22:35:49.780571] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:54.612 22:35:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:36:54.872 Malloc0 00:36:54.872 22:35:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:55.131 22:35:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:55.131 22:35:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:55.392 [2024-10-01 22:35:50.510181] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:55.392 22:35:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:36:55.653 [2024-10-01 22:35:50.694648] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:36:55.653 22:35:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:36:55.653 [2024-10-01 22:35:50.879216] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
00:36:55.653 22:35:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=320568
00:36:55.653 22:35:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:36:55.653 22:35:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:36:55.653 22:35:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 320568 /var/tmp/bdevperf.sock
00:36:55.653 22:35:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 320568 ']'
00:36:55.653 22:35:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:36:55.653 22:35:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:36:55.653 22:35:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:36:55.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:36:55.653 22:35:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable
00:36:55.653 22:35:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:36:56.594 22:35:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:36:56.594 22:35:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0
00:36:56.594 22:35:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:36:57.162 NVMe0n1
00:36:57.162 22:35:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:36:57.422
00:36:57.422 22:35:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=320886
00:36:57.422 22:35:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:36:57.422 22:35:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:36:58.362 22:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:36:58.623 [2024-10-01 22:35:53.727896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9210 is same with the state(6) to be set
[the same recv-state message repeats ~30 more times for tqpair=0x15b9210, timestamps 22:35:53.727932 through 22:35:53.728078; duplicates collapsed]
00:36:58.624 22:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:37:01.922 22:35:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:37:01.922
00:37:02.183 22:35:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:37:02.183 [2024-10-01 22:35:57.333931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba010 is same with the state(6) to be set
[the same recv-state message repeats ~60 more times for tqpair=0x15ba010, timestamps 22:35:57.333961 through 22:35:57.334241; duplicates collapsed]
00:37:02.184 22:35:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:37:05.485 22:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:37:05.485 [2024-10-01 22:36:00.562341] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:37:05.485 22:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:37:06.559 22:36:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:37:06.560 [2024-10-01 22:36:01.752020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15baf80 is same with the state(6) to be set
[the same recv-state message repeats 4 more times for tqpair=0x15baf80, through 22:36:01.752065; duplicates collapsed]
00:37:06.560 22:36:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 320886
00:37:13.154 {
00:37:13.154 "results": [
00:37:13.154 {
00:37:13.154 "job": "NVMe0n1",
00:37:13.154 "core_mask": "0x1",
00:37:13.154 "workload": "verify",
00:37:13.154 "status": "finished",
00:37:13.154 "verify_range": {
00:37:13.154 "start": 0,
00:37:13.154 "length": 16384
00:37:13.154 },
00:37:13.154 "queue_depth": 128,
00:37:13.154 "io_size": 4096,
00:37:13.154 "runtime": 15.006099,
00:37:13.154 "iops": 11017.920113681777,
00:37:13.154 "mibps": 43.03875044406944,
00:37:13.154 "io_failed": 4957,
00:37:13.154 "io_timeout": 0,
00:37:13.154 "avg_latency_us": 11250.393018151068,
00:37:13.154 "min_latency_us": 549.5466666666666,
00:37:13.154 "max_latency_us": 14636.373333333333
00:37:13.154 }
00:37:13.154 ],
00:37:13.154 "core_count": 1
00:37:13.154 }
00:37:13.154 22:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 320568
00:37:13.154 22:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 320568 ']'
00:37:13.154 22:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 320568
00:37:13.154 22:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:37:13.154 22:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:37:13.154 22:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 320568
00:37:13.154 22:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:37:13.154 22:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:37:13.154 22:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 320568'
killing process with pid 320568
00:37:13.154 22:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 320568
00:37:13.154 22:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 320568
00:37:13.154 22:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-10-01 22:35:50.946336] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization...
[2024-10-01 22:35:50.946396] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid320568 ]
[2024-10-01 22:35:51.007483] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-10-01 22:35:51.072321] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
Running I/O for 15 seconds...
00:37:13.154 11135.00 IOPS, 43.50 MiB/s
00:37:13.154 [2024-10-01 22:35:53.729139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:96192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:13.154 [2024-10-01 22:35:53.729173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[equivalent command/completion pairs repeat for every in-flight I/O on the deleted queue pair: READs lba 96200-96296 and WRITEs lba 96304-97128, each completed ABORTED - SQ DELETION (00/08); duplicates collapsed]
00:37:13.158 [2024-10-01 22:35:53.731200] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:37:13.158 [2024-10-01 22:35:53.731208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97136 len:8 PRP1 0x0 PRP2 0x0
00:37:13.158 [2024-10-01 22:35:53.731216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[for each remaining queued WRITE (lba 97144-97192) the log repeats an nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o record followed by the same manual-completion triplet; the capture ends truncated mid-record]
Command completed manually: 00:37:13.158 [2024-10-01 22:35:53.731429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97200 len:8 PRP1 0x0 PRP2 0x0 00:37:13.158 [2024-10-01 22:35:53.731436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.158 [2024-10-01 22:35:53.731444] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:13.158 [2024-10-01 22:35:53.731450] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:13.158 [2024-10-01 22:35:53.731457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97208 len:8 PRP1 0x0 PRP2 0x0 00:37:13.158 [2024-10-01 22:35:53.731464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.158 [2024-10-01 22:35:53.731497] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa878f0 was disconnected and freed. reset controller. 00:37:13.158 [2024-10-01 22:35:53.731506] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:37:13.158 [2024-10-01 22:35:53.731526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:13.158 [2024-10-01 22:35:53.731534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.158 [2024-10-01 22:35:53.731542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:13.158 [2024-10-01 22:35:53.731550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.158 [2024-10-01 22:35:53.731558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:13.158 [2024-10-01 22:35:53.731568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.158 [2024-10-01 22:35:53.731576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:13.158 [2024-10-01 22:35:53.731583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.158 [2024-10-01 22:35:53.731591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.158 [2024-10-01 22:35:53.731619] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa67000 (9): Bad file descriptor 00:37:13.158 [2024-10-01 22:35:53.735166] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.158 [2024-10-01 22:35:53.768086] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
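Editor's note: the per-command prints above all share one fixed shape (op, sqid, cid, nsid, lba, len), which is what makes runs like lba 96920..97208 collapsible into ranges. A minimal sketch of that collapsing in Python (the regex mirrors the print format seen in this log; `collapse_aborts` is an invented helper name, not SPDK code):

```python
import re

# Matches SPDK nvme_io_qpair_print_command NOTICE lines as they appear in
# this log, e.g. "*NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:96920 len:8 ..."
CMD_RE = re.compile(r"\*NOTICE\*: (READ|WRITE) sqid:\d+ cid:\d+ nsid:\d+ lba:(\d+) len:(\d+)")

def collapse_aborts(log_text):
    """Collapse consecutive aborted READ/WRITE prints into (op, first_lba, last_lba, count) runs."""
    runs = []
    for m in CMD_RE.finditer(log_text):
        op, lba, length = m.group(1), int(m.group(2)), int(m.group(3))
        if runs and runs[-1][0] == op and runs[-1][2] + length == lba:
            op_, first, _last, n = runs[-1]
            runs[-1] = (op_, first, lba, n + 1)  # extend a contiguous run
        else:
            runs.append((op, lba, lba, 1))       # start a new run
    return runs

# Fed the WRITE prints summarized above, this would yield a single run:
# [('WRITE', 96920, 97208, 37)]  -- 37 commands of 8 blocks each
```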
00:37:13.158 11071.50 IOPS, 43.25 MiB/s 11076.67 IOPS, 43.27 MiB/s 11083.75 IOPS, 43.30 MiB/s
00:37:13.159 [2024-10-01 22:35:57.335360 .. 22:35:57.336988] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: 94 queued READ commands (sqid:1 nsid:1, lba 27464..28208 step 8, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [per-command NOTICE pairs elided]
00:37:13.162 [2024-10-01 22:35:57.336998 .. 22:35:57.337522] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: 32 queued WRITE commands (sqid:1 nsid:1, lba 28216..28464 step 8, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [per-command NOTICE pairs elided]
00:37:13.163 [2024-10-01 22:35:57.337544 .. 22:35:57.337590] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o; 558:nvme_qpair_manual_complete_request: *NOTICE*: 2 queued WRITE commands (sqid:1 cid:0 nsid:1, lba 28472 and 28480, len:8, PRP1 0x0 PRP2 0x0) completed manually as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [repeats elided]
00:37:13.163 [2024-10-01 22:35:57.337631] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa89b30 was disconnected and freed. reset controller.
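Editor's note: the interleaved throughput samples (11071.50 IOPS, 43.25 MiB/s, and so on) are consistent with the fixed I/O size every command print shows: len:8 blocks, i.e. 4 KiB (len:0x1000). A one-line sanity check in Python, assuming 512-byte blocks (`mib_per_s` is an illustrative helper, not part of the test):

```python
def mib_per_s(iops, blocks_per_io=8, block_size=512):
    """Convert an IOPS sample to MiB/s for fixed 8 x 512 B = 4 KiB I/Os."""
    return iops * blocks_per_io * block_size / (1024 * 1024)

# 11071.50 IOPS * 4 KiB = 43.25 MiB/s, matching the sample printed above.
assert round(mib_per_s(11071.50), 2) == 43.25
```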
00:37:13.163 [2024-10-01 22:35:57.337641] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:37:13.163 [2024-10-01 22:35:57.337661 .. 22:35:57.337717] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: 4 admin ASYNC EVENT REQUEST (0c) commands (qid:0 cid:3..0, nsid:0 cdw10:00000000 cdw11:00000000) each completed ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [repeats elided]
00:37:13.163 [2024-10-01 22:35:57.337725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:13.163 [2024-10-01 22:35:57.341305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:13.163 [2024-10-01 22:35:57.341330] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa67000 (9): Bad file descriptor
00:37:13.163 [2024-10-01 22:35:57.385129] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
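Editor's note: every abort in this log completes with status (00/08), which spdk_nvme_print_completion renders as ABORTED - SQ DELETION: status code type 0x0 (NVMe generic command status) and status code 0x08 (command aborted due to SQ deletion), the expected outcome when qpairs are torn down mid-failover. A small illustrative decoder (names are hypothetical, not SPDK's):

```python
# (sct/sc) pairs as printed, e.g. "(00/08)": sct = status code type,
# sc = status code. sct 0x0 is the NVMe generic command status set.
GENERIC_STATUS = {
    0x00: "SUCCESSFUL COMPLETION",
    0x08: "ABORTED - SQ DELETION",  # SQ torn down while the command was queued
}

def decode_status(sct: int, sc: int) -> str:
    if sct == 0x0:
        return GENERIC_STATUS.get(sc, f"generic status 0x{sc:02x}")
    return f"sct 0x{sct:x} / sc 0x{sc:02x}"

print(decode_status(0x0, 0x08))  # -> ABORTED - SQ DELETION
```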
00:37:13.163 11040.80 IOPS, 43.13 MiB/s 11026.33 IOPS, 43.07 MiB/s 11031.14 IOPS, 43.09 MiB/s 11039.62 IOPS, 43.12 MiB/s 11046.78 IOPS, 43.15 MiB/s [2024-10-01 22:36:01.753744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.163 [2024-10-01 22:36:01.753778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... identical ABORTED - SQ DELETION completion notices elided for the rest of the queued I/O on this qpair: WRITEs lba 34280-35136 and READs lba 34128-34240 ...] 00:37:13.167 [2024-10-01 22:36:01.755920] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o [... READs lba 34248-34264 and WRITE lba 35144 completed manually with the same aborted status ...] 00:37:13.167 [2024-10-01 22:36:01.756036] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa89d10 was disconnected and freed. reset controller. 
00:37:13.167 [2024-10-01 22:36:01.756046] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:37:13.167 [2024-10-01 22:36:01.756066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:13.167 [2024-10-01 22:36:01.756075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.167 [2024-10-01 22:36:01.756083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:13.167 [2024-10-01 22:36:01.756090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.167 [2024-10-01 22:36:01.756099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:13.167 [2024-10-01 22:36:01.756106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.167 [2024-10-01 22:36:01.756114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:13.167 [2024-10-01 22:36:01.756121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.167 [2024-10-01 22:36:01.756129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:13.167 [2024-10-01 22:36:01.756161] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa67000 (9): Bad file descriptor 00:37:13.167 [2024-10-01 22:36:01.759701] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:13.167 [2024-10-01 22:36:01.836093] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
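Third and last hop, landing back on 10.0.0.2:4420. Because the reconnect is asynchronous, a script that needs to act after the Nth hop can poll the captured log for the success message instead of sleeping a fixed interval; a sketch, with the log path taken from the try.txt this test cats below:

    log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
    want=3   # one 'Resetting controller successful' per hop
    until (( $(grep -c 'Resetting controller successful' "$log") >= want )); do
        sleep 0.5
    done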
00:37:13.167 11014.60 IOPS, 43.03 MiB/s 11028.64 IOPS, 43.08 MiB/s 11026.25 IOPS, 43.07 MiB/s 11017.08 IOPS, 43.04 MiB/s 11004.14 IOPS, 42.98 MiB/s 00:37:13.167 Latency(us) 00:37:13.167 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:13.167 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:13.167 Verification LBA range: start 0x0 length 0x4000 00:37:13.167 NVMe0n1 : 15.01 11017.92 43.04 330.33 0.00 11250.39 549.55 14636.37 00:37:13.167 =================================================================================================================== 00:37:13.167 Total : 11017.92 43.04 330.33 0.00 11250.39 549.55 14636.37 00:37:13.167 Received shutdown signal, test time was about 15.000000 seconds 00:37:13.167 00:37:13.167 Latency(us) 00:37:13.167 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:13.167 =================================================================================================================== 00:37:13.167 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:13.167 22:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:37:13.167 22:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:37:13.167 22:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:37:13.167 22:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=323873 00:37:13.167 22:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 323873 /var/tmp/bdevperf.sock 00:37:13.167 22:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:37:13.167 22:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 323873 ']' 00:37:13.167 22:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:13.167 22:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:13.167 22:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:13.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
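Two things happen in the trace above. First, the 15-second run is graded at @65-@67: try.txt must contain exactly three 'Resetting controller successful' lines, one per failover hop in the run above. Second, a fresh bdevperf is started with -z, which makes it wait for configuration over the RPC socket (/var/tmp/bdevperf.sock) instead of running immediately, and waitforlisten blocks until that socket is up. The gate, condensed (log path assumed as in the sketch above):

    count=$(grep -c 'Resetting controller successful' "$log")
    (( count != 3 )) && exit 1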
00:37:13.167 22:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:13.167 22:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:37:13.739 22:36:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:13.739 22:36:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:37:13.740 22:36:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:37:13.740 [2024-10-01 22:36:08.985032] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:37:14.005 22:36:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:37:14.005 [2024-10-01 22:36:09.165482] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:37:14.005 22:36:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:37:14.576 NVMe0n1 00:37:14.576 22:36:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:37:14.836 00:37:14.836 22:36:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:37:15.096 00:37:15.096 22:36:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:37:15.096 22:36:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:37:15.356 22:36:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:37:15.616 22:36:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:37:18.914 22:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:37:18.914 22:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:37:18.914 22:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=325514 00:37:18.914 22:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 325514 00:37:18.914 22:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:37:19.855 { 00:37:19.855 "results": [ 00:37:19.855 { 00:37:19.855 "job": "NVMe0n1", 00:37:19.855 "core_mask": "0x1", 00:37:19.855 "workload": "verify", 00:37:19.855 
"status": "finished", 00:37:19.855 "verify_range": { 00:37:19.855 "start": 0, 00:37:19.855 "length": 16384 00:37:19.855 }, 00:37:19.855 "queue_depth": 128, 00:37:19.855 "io_size": 4096, 00:37:19.855 "runtime": 1.004307, 00:37:19.855 "iops": 11019.538846189462, 00:37:19.855 "mibps": 43.045073617927585, 00:37:19.855 "io_failed": 0, 00:37:19.855 "io_timeout": 0, 00:37:19.855 "avg_latency_us": 11560.33235625433, 00:37:19.855 "min_latency_us": 2034.3466666666666, 00:37:19.855 "max_latency_us": 10868.053333333333 00:37:19.855 } 00:37:19.855 ], 00:37:19.855 "core_count": 1 00:37:19.855 } 00:37:19.855 22:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:37:19.855 [2024-10-01 22:36:08.040957] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:37:19.855 [2024-10-01 22:36:08.041021] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid323873 ] 00:37:19.855 [2024-10-01 22:36:08.102038] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:19.855 [2024-10-01 22:36:08.165217] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:19.855 [2024-10-01 22:36:10.608429] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:37:19.855 [2024-10-01 22:36:10.608484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:19.855 [2024-10-01 22:36:10.608496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.855 [2024-10-01 22:36:10.608507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:19.855 [2024-10-01 22:36:10.608515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.855 [2024-10-01 22:36:10.608524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:19.855 [2024-10-01 22:36:10.608531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.855 [2024-10-01 22:36:10.608540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:19.855 [2024-10-01 22:36:10.608547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:19.855 [2024-10-01 22:36:10.608555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:19.855 [2024-10-01 22:36:10.608585] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:19.855 [2024-10-01 22:36:10.608601] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2374000 (9): Bad file descriptor 00:37:19.855 [2024-10-01 22:36:10.620183] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:37:19.855 Running I/O for 1 seconds... 
00:37:19.855 10939.00 IOPS, 42.73 MiB/s 00:37:19.855 Latency(us) 00:37:19.855 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:19.855 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:19.855 Verification LBA range: start 0x0 length 0x4000 00:37:19.855 NVMe0n1 : 1.00 11019.54 43.05 0.00 0.00 11560.33 2034.35 10868.05 00:37:19.855 =================================================================================================================== 00:37:19.855 Total : 11019.54 43.05 0.00 0.00 11560.33 2034.35 10868.05 00:37:19.855 22:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:37:19.855 22:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:37:20.115 22:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:37:20.115 22:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:37:20.115 22:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:37:20.375 22:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:37:20.635 22:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:37:23.932 22:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:37:23.932 22:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:37:23.932 22:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 323873 00:37:23.932 22:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 323873 ']' 00:37:23.932 22:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 323873 00:37:23.932 22:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:37:23.932 22:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:23.932 22:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 323873 00:37:23.932 22:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:23.932 22:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:23.932 22:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 323873' 00:37:23.932 killing process with pid 323873 00:37:23.932 22:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 323873 00:37:23.932 22:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 323873 00:37:23.932 22:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:37:23.932 22:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:24.193 22:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:37:24.193 22:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:37:24.193 22:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:37:24.193 22:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # nvmfcleanup 00:37:24.193 22:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:37:24.193 22:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:24.193 22:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:37:24.193 22:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:24.193 22:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:24.193 rmmod nvme_tcp 00:37:24.193 rmmod nvme_fabrics 00:37:24.193 rmmod nvme_keyring 00:37:24.193 22:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:24.193 22:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:37:24.193 22:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:37:24.193 22:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@515 -- # '[' -n 320019 ']' 00:37:24.193 22:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # killprocess 320019 00:37:24.193 22:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 320019 ']' 00:37:24.193 22:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 320019 00:37:24.193 22:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:37:24.193 22:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:24.193 22:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 320019 00:37:24.452 22:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:24.452 22:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:24.452 22:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 320019' 00:37:24.452 killing process with pid 320019 00:37:24.452 22:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 320019 00:37:24.452 22:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 320019 00:37:24.452 22:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:37:24.452 22:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:37:24.452 22:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:37:24.452 22:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:37:24.452 22:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-save 00:37:24.452 22:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:37:24.452 22:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-restore 00:37:24.452 22:36:19 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:24.452 22:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:24.453 22:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:24.453 22:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:24.453 22:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:26.993 22:36:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:26.993 00:37:26.993 real 0m40.035s 00:37:26.993 user 2m5.195s 00:37:26.993 sys 0m8.260s 00:37:26.993 22:36:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:26.993 22:36:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:37:26.993 ************************************ 00:37:26.993 END TEST nvmf_failover 00:37:26.993 ************************************ 00:37:26.993 22:36:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:37:26.993 22:36:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:37:26.993 22:36:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:26.993 22:36:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.993 ************************************ 00:37:26.993 START TEST nvmf_host_discovery 00:37:26.993 ************************************ 00:37:26.993 22:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:37:26.993 * Looking for test storage... 
00:37:26.993 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:26.993 22:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:26.993 22:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:37:26.993 22:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:26.993 22:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:26.993 22:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:26.993 22:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:26.993 22:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:26.993 22:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:37:26.993 22:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:37:26.993 22:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:37:26.993 22:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:37:26.993 22:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:37:26.993 22:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:37:26.993 22:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:37:26.993 22:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:26.993 22:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:37:26.993 22:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:37:26.993 22:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:26.993 22:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:26.993 22:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:37:26.993 22:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:37:26.993 22:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:26.993 22:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:37:26.993 22:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:37:26.993 22:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:37:26.993 22:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:37:26.993 22:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:26.993 22:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:37:26.993 22:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:37:26.993 22:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:26.993 22:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:26.993 22:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:37:26.993 22:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:26.993 22:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:26.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:26.993 --rc genhtml_branch_coverage=1 00:37:26.993 --rc genhtml_function_coverage=1 00:37:26.993 --rc genhtml_legend=1 00:37:26.993 --rc geninfo_all_blocks=1 00:37:26.993 --rc geninfo_unexecuted_blocks=1 00:37:26.993 00:37:26.993 ' 00:37:26.993 22:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:26.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:26.993 --rc genhtml_branch_coverage=1 00:37:26.993 --rc genhtml_function_coverage=1 00:37:26.993 --rc genhtml_legend=1 00:37:26.993 --rc geninfo_all_blocks=1 00:37:26.993 --rc geninfo_unexecuted_blocks=1 00:37:26.993 00:37:26.993 ' 00:37:26.993 22:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:26.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:26.993 --rc genhtml_branch_coverage=1 00:37:26.993 --rc genhtml_function_coverage=1 00:37:26.993 --rc genhtml_legend=1 00:37:26.993 --rc geninfo_all_blocks=1 00:37:26.993 --rc geninfo_unexecuted_blocks=1 00:37:26.993 00:37:26.993 ' 00:37:26.993 22:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:26.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:26.994 --rc genhtml_branch_coverage=1 00:37:26.994 --rc genhtml_function_coverage=1 00:37:26.994 --rc genhtml_legend=1 00:37:26.994 --rc geninfo_all_blocks=1 00:37:26.994 --rc geninfo_unexecuted_blocks=1 00:37:26.994 00:37:26.994 ' 00:37:26.994 22:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:26.994 22:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:37:26.994 22:36:21 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:26.994 22:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:26.994 22:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:26.994 22:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:26.994 22:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:26.994 22:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:26.994 22:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:26.994 22:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:26.994 22:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:26.994 22:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:26.994 22:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:37:26.994 22:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:37:26.994 22:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:26.994 22:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:26.994 22:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:26.994 22:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:26.994 22:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:26.994 22:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:37:26.994 22:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:26.994 22:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:26.994 22:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:26.994 22:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:26.994 22:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:26.994 22:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:26.994 22:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:37:26.994 22:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:26.994 22:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:37:26.994 22:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:26.994 22:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:26.994 22:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:26.994 22:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:26.994 22:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:26.994 22:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:26.994 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:26.994 22:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:26.994 22:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:26.994 22:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:26.994 22:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:37:26.994 22:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:37:26.994 22:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:37:26.994 22:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:37:26.994 22:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:37:26.994 22:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:37:26.994 22:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:37:26.994 22:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:37:26.994 22:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:26.994 22:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:37:26.994 22:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:37:26.994 22:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:37:26.994 22:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:26.994 22:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:26.994 22:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:26.994 22:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:37:26.994 22:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:37:26.994 22:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:37:26.994 22:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:35.127 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:35.127 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:35.127 22:36:28 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:35.127 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:35.127 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:35.128 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:35.128 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:35.128 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:35.128 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:35.128 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:35.128 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:35.128 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:37:35.128 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:37:35.128 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:37:35.128 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:37:35.128 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:37:35.128 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:35.128 
22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:35.128 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:35.128 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:35.128 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:35.128 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:35.128 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:35.128 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:35.128 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:35.128 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:35.128 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:35.128 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:35.128 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:35.128 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:35.128 22:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:35.128 22:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:35.128 22:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:35.128 22:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:35.128 22:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:35.128 22:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:35.128 22:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:35.128 22:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:35.128 22:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:35.128 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:35.128 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.515 ms 00:37:35.128 00:37:35.128 --- 10.0.0.2 ping statistics --- 00:37:35.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:35.128 rtt min/avg/max/mdev = 0.515/0.515/0.515/0.000 ms 00:37:35.128 22:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:35.128 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:35.128 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:37:35.128 00:37:35.128 --- 10.0.0.1 ping statistics --- 00:37:35.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:35.128 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:37:35.128 22:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:35.128 22:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # return 0 00:37:35.128 22:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:37:35.128 22:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:35.128 22:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:37:35.128 22:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:37:35.128 22:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:35.128 22:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:37:35.128 22:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:37:35.128 22:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:37:35.128 22:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:35.128 22:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:35.128 22:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:35.128 22:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:37:35.128 22:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # nvmfpid=330711 00:37:35.128 22:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # waitforlisten 330711 00:37:35.128 22:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 330711 ']' 00:37:35.128 22:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:35.128 22:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:35.128 22:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:35.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:35.128 22:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:35.128 22:36:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:35.128 [2024-10-01 22:36:29.347208] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
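A note on what the two pings close out: nvmf_tcp_init has just moved the second physical port into a private network namespace so that a single machine can act as both target (10.0.0.2, inside cvl_0_0_ns_spdk) and initiator (10.0.0.1, root namespace). The commands below are condensed from the trace above; the interface and namespace names are the ones in this log, and the sketch omits the pre-flush of stale addresses and the surrounding error handling.

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port leaves the root ns

    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # admit NVMe/TCP on 4420, tagged SPDK_NVMF so teardown can strip exactly this rule
    # (that is what "iptables-save | grep -v SPDK_NVMF | iptables-restore" did earlier)
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    ping -c 1 10.0.0.2                                 # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> root ns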
00:37:35.128 [2024-10-01 22:36:29.347255] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:35.128 [2024-10-01 22:36:29.423278] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:35.128 [2024-10-01 22:36:29.510434] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:35.128 [2024-10-01 22:36:29.510492] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:35.128 [2024-10-01 22:36:29.510500] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:35.128 [2024-10-01 22:36:29.510508] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:35.128 [2024-10-01 22:36:29.510514] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:35.128 [2024-10-01 22:36:29.510546] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:35.128 22:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:35.128 22:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:37:35.128 22:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:35.128 22:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:35.128 22:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:35.128 22:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:35.128 22:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:35.128 22:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.128 22:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:35.128 [2024-10-01 22:36:30.241041] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:35.128 22:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.128 22:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:37:35.128 22:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.128 22:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:35.128 [2024-10-01 22:36:30.249270] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:37:35.128 22:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.128 22:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:37:35.128 22:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.128 22:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:35.128 null0 00:37:35.128 22:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.128 22:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:37:35.128 22:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.128 22:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:35.128 null1 00:37:35.128 22:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.128 22:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:37:35.128 22:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.128 22:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:35.128 22:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.128 22:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=330877 00:37:35.128 22:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 330877 /tmp/host.sock 00:37:35.128 22:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:37:35.128 22:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 330877 ']' 00:37:35.128 22:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:37:35.128 22:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:35.129 22:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:37:35.129 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:37:35.129 22:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:35.129 22:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:35.129 [2024-10-01 22:36:30.342975] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
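Worth noting before the RPC back-and-forth that follows: this test runs two SPDK applications side by side. The nvmf target (pid 330711) runs inside cvl_0_0_ns_spdk on the default RPC socket, while a second nvmf_tgt instance (pid 330877, started with -m 0x1 -r /tmp/host.sock) plays the NVMe-oF host through its bdev_nvme layer. The target-side preparation traced here reduces to the following; the trace issues these through the rpc_cmd wrapper, and invoking scripts/rpc.py directly is an assumed equivalent.

    # target side (default socket, inside the namespace): discovery listener plus two null bdevs
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    rpc.py bdev_null_create null0 1000 512     # size/block-size arguments as traced
    rpc.py bdev_null_create null1 1000 512

    # host side: follow the discovery service; attached subsystems get bdev names prefixed "nvme"
    rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test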
00:37:35.129 [2024-10-01 22:36:30.343053] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid330877 ] 00:37:35.389 [2024-10-01 22:36:30.409345] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:35.389 [2024-10-01 22:36:30.484522] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:35.960 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:35.960 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:37:35.960 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:35.960 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:37:35.960 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.960 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:35.960 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.960 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:37:35.960 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.960 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:35.961 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.961 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:37:35.961 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:37:35.961 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:37:35.961 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:37:35.961 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.961 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:35.961 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:37:35.961 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:37:35.961 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.961 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:37:35.961 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:37:35.961 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:35.961 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:37:35.961 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.961 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 
-- # sort 00:37:35.961 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:35.961 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:37:35.961 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.221 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:37:36.221 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:37:36.221 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.221 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:36.221 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.221 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:37:36.221 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:37:36.221 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:37:36.221 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.221 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:37:36.221 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:36.221 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:37:36.221 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.221 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:37:36.221 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:37:36.221 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:37:36.221 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:36.221 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:37:36.221 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.221 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:36.221 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:37:36.221 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.221 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:37:36.221 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:37:36.221 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.221 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:36.221 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.221 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:37:36.221 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd 
-s /tmp/host.sock bdev_nvme_get_controllers 00:37:36.221 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:37:36.221 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.221 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:37:36.221 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:36.221 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:37:36.221 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.221 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:37:36.221 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:37:36.221 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:36.221 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:37:36.221 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.221 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:36.221 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:37:36.221 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:37:36.221 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.221 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:37:36.221 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:36.221 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.221 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:36.221 [2024-10-01 22:36:31.472342] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:36.481 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.481 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:37:36.481 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:37:36.481 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:37:36.481 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.481 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:37:36.481 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:36.481 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:37:36.481 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.481 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:37:36.481 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:37:36.481 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:36.482 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:37:36.482 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.482 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:37:36.482 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:36.482 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:37:36.482 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.482 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:37:36.482 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:37:36.482 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:37:36.482 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:37:36.482 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:37:36.482 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:37:36.482 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:37:36.482 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:37:36.482 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:37:36.482 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:37:36.482 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:37:36.482 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.482 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:36.482 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.482 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:37:36.482 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:37:36.482 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:37:36.482 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:37:36.482 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:37:36.482 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.482 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:36.482 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.482 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:37:36.482 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:37:36.482 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:37:36.482 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:37:36.482 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:37:36.482 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:37:36.482 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:37:36.482 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:37:36.482 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.482 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:37:36.482 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:36.482 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:37:36.482 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.482 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:37:36.482 22:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:37:37.053 [2024-10-01 22:36:32.190579] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:37:37.053 [2024-10-01 22:36:32.190603] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:37:37.053 [2024-10-01 22:36:32.190617] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:37:37.315 
[2024-10-01 22:36:32.320039] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:37:37.315 [2024-10-01 22:36:32.501811] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:37:37.315 [2024-10-01 22:36:32.501835] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:37:37.576 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:37:37.576 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:37:37.576 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:37:37.576 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:37:37.576 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:37:37.576 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:37:37.576 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.576 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:37:37.576 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:37.576 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.576 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:37.576 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:37:37.576 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:37:37.576 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:37:37.576 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:37:37.576 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:37:37.576 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:37:37.576 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:37:37.576 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:37.576 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:37:37.576 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.576 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:37:37.576 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:37.576 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:37:37.576 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.576 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
00:37:37.576 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:37:37.576 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:37:37.576 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:37:37.576 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:37:37.576 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:37:37.576 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:37:37.576 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:37:37.576 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:37:37.576 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:37:37.576 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.576 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:37:37.576 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:37.576 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:37:37.576 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:37:37.839 22:36:32 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:37.839 22:36:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.839 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:37:37.839 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:37:37.839 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:37:37.839 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:37:37.839 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:37:37.839 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.839 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:37.839 [2024-10-01 22:36:33.020741] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:37:37.839 [2024-10-01 22:36:33.021749] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:37:37.839 [2024-10-01 22:36:33.021774] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:37:37.839 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.839 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:37:37.839 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:37:37.839 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:37:37.839 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:37:37.839 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:37:37.839 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:37:37.839 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:37:37.839 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:37:37.839 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.839 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:37.839 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:37:37.839 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:37:37.839 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.839 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:37.839 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:37:37.839 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:37:37.839 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:37:37.839 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:37:37.839 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:37:37.839 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:37:37.839 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:37:37.839 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:37.839 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:37:37.839 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.839 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:37:37.839 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:37.840 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:37:38.101 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:38.101 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:37:38.101 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:37:38.101 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:37:38.101 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:37:38.101 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:37:38.101 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:37:38.101 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:37:38.101 22:36:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:37:38.101 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:37:38.101 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:37:38.101 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:38.101 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:37:38.101 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:38.101 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:37:38.101 [2024-10-01 22:36:33.149165] bdev_nvme.c:7086:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:37:38.101 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:38.101 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:37:38.101 22:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:37:38.101 [2024-10-01 22:36:33.213946] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:37:38.101 [2024-10-01 22:36:33.213964] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:37:38.101 [2024-10-01 22:36:33.213970] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:37:39.044 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:37:39.044 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:37:39.044 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:37:39.044 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:37:39.044 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:37:39.044 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.044 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:37:39.044 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:39.044 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:37:39.044 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.044 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:37:39.044 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:37:39.044 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:37:39.044 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:37:39.044 22:36:34 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:37:39.044 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:37:39.044 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:37:39.044 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:37:39.044 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:37:39.044 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:37:39.044 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:37:39.044 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:37:39.044 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.044 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:39.044 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.044 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:37:39.044 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:37:39.044 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:37:39.044 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:37:39.044 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:39.044 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.044 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:39.044 [2024-10-01 22:36:34.293130] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:37:39.044 [2024-10-01 22:36:34.293153] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:37:39.044 [2024-10-01 22:36:34.293649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:39.044 [2024-10-01 22:36:34.293665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:39.044 [2024-10-01 22:36:34.293674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:39.044 [2024-10-01 22:36:34.293681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:39.044 [2024-10-01 22:36:34.293690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:39.044 [2024-10-01 22:36:34.293698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:39.044 [2024-10-01 22:36:34.293705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:39.044 [2024-10-01 22:36:34.293713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:39.044 [2024-10-01 22:36:34.293720] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bac090 is same with the state(6) to be set 00:37:39.307 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.307 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:37:39.307 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:37:39.307 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:37:39.307 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:37:39.307 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:37:39.307 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:37:39.307 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:37:39.307 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.307 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:39.307 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:37:39.307 [2024-10-01 22:36:34.303659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bac090 (9): Bad file descriptor 00:37:39.307 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:37:39.307 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:37:39.307 [2024-10-01 22:36:34.313701] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:37:39.307 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.307 [2024-10-01 22:36:34.314050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.307 [2024-10-01 22:36:34.314067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bac090 with addr=10.0.0.2, port=4420 00:37:39.307 [2024-10-01 22:36:34.314075] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bac090 is same with the state(6) to be set 00:37:39.307 [2024-10-01 22:36:34.314087] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bac090 (9): Bad file descriptor 00:37:39.307 [2024-10-01 22:36:34.314098] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:39.307 [2024-10-01 22:36:34.314105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:37:39.307 [2024-10-01 22:36:34.314113] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:39.307 [2024-10-01 22:36:34.314125] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:39.307 [2024-10-01 22:36:34.323761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:37:39.307 [2024-10-01 22:36:34.324053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.307 [2024-10-01 22:36:34.324066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bac090 with addr=10.0.0.2, port=4420 00:37:39.307 [2024-10-01 22:36:34.324073] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bac090 is same with the state(6) to be set 00:37:39.307 [2024-10-01 22:36:34.324084] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bac090 (9): Bad file descriptor 00:37:39.307 [2024-10-01 22:36:34.324095] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:39.307 [2024-10-01 22:36:34.324101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:37:39.307 [2024-10-01 22:36:34.324109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:39.307 [2024-10-01 22:36:34.324119] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:39.307 [2024-10-01 22:36:34.333814] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:37:39.307 [2024-10-01 22:36:34.334108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.307 [2024-10-01 22:36:34.334124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bac090 with addr=10.0.0.2, port=4420 00:37:39.307 [2024-10-01 22:36:34.334131] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bac090 is same with the state(6) to be set 00:37:39.307 [2024-10-01 22:36:34.334142] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bac090 (9): Bad file descriptor 00:37:39.307 [2024-10-01 22:36:34.334153] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:39.307 [2024-10-01 22:36:34.334159] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:37:39.307 [2024-10-01 22:36:34.334167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:39.307 [2024-10-01 22:36:34.334177] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:39.307 [2024-10-01 22:36:34.343867] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:37:39.307 [2024-10-01 22:36:34.344159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.307 [2024-10-01 22:36:34.344171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bac090 with addr=10.0.0.2, port=4420 00:37:39.307 [2024-10-01 22:36:34.344179] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bac090 is same with the state(6) to be set 00:37:39.307 [2024-10-01 22:36:34.344190] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bac090 (9): Bad file descriptor 00:37:39.307 [2024-10-01 22:36:34.344201] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:39.307 [2024-10-01 22:36:34.344208] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:37:39.307 [2024-10-01 22:36:34.344215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:39.307 [2024-10-01 22:36:34.344226] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:39.307 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:39.307 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:37:39.307 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:37:39.307 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:37:39.307 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:37:39.307 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:37:39.307 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:37:39.307 [2024-10-01 22:36:34.353922] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:37:39.307 [2024-10-01 22:36:34.354210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.307 [2024-10-01 22:36:34.354223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bac090 with addr=10.0.0.2, port=4420 00:37:39.307 [2024-10-01 22:36:34.354230] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bac090 is same with the state(6) to be set 00:37:39.307 [2024-10-01 22:36:34.354241] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bac090 (9): Bad file descriptor 00:37:39.307 [2024-10-01 22:36:34.354252] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:39.307 [2024-10-01 22:36:34.354258] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:37:39.307 [2024-10-01 22:36:34.354265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:37:39.307 [2024-10-01 22:36:34.354280] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:39.307 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:37:39.307 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:39.307 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.307 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:39.307 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:37:39.307 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:37:39.307 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:37:39.307 [2024-10-01 22:36:34.363975] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:37:39.307 [2024-10-01 22:36:34.364267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.307 [2024-10-01 22:36:34.364280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bac090 with addr=10.0.0.2, port=4420 00:37:39.307 [2024-10-01 22:36:34.364287] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bac090 is same with the state(6) to be set 00:37:39.307 [2024-10-01 22:36:34.364298] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bac090 (9): Bad file descriptor 00:37:39.307 [2024-10-01 22:36:34.364309] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:39.307 [2024-10-01 22:36:34.364315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:37:39.307 [2024-10-01 22:36:34.364322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:39.307 [2024-10-01 22:36:34.364333] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:39.307 [2024-10-01 22:36:34.374032] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:37:39.307 [2024-10-01 22:36:34.374323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.307 [2024-10-01 22:36:34.374335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bac090 with addr=10.0.0.2, port=4420 00:37:39.307 [2024-10-01 22:36:34.374342] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bac090 is same with the state(6) to be set 00:37:39.307 [2024-10-01 22:36:34.374353] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bac090 (9): Bad file descriptor 00:37:39.307 [2024-10-01 22:36:34.374363] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:39.308 [2024-10-01 22:36:34.374370] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:37:39.308 [2024-10-01 22:36:34.374376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:37:39.308 [2024-10-01 22:36:34.374387] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.308 [2024-10-01 22:36:34.379524] bdev_nvme.c:6949:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:37:39.308 [2024-10-01 22:36:34.379543] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- 
)) 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:37:39.308 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.569 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:37:39.569 22:36:34 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:37:39.569 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:37:39.569 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:37:39.569 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:37:39.569 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:37:39.569 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:37:39.569 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:37:39.569 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:37:39.569 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:39.569 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.569 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:37:39.569 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:39.569 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:37:39.569 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.569 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:37:39.569 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:37:39.569 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:37:39.569 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:37:39.569 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:37:39.569 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:37:39.569 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:37:39.569 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:37:39.569 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:37:39.569 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:37:39.570 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:37:39.570 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:37:39.570 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.570 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:39.570 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.570 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:37:39.570 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:37:39.570 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:37:39.570 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:37:39.570 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:37:39.570 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.570 22:36:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:40.512 [2024-10-01 22:36:35.678808] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:37:40.512 [2024-10-01 22:36:35.678825] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:37:40.512 [2024-10-01 22:36:35.678838] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:37:40.773 [2024-10-01 22:36:35.766116] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:37:41.036 [2024-10-01 22:36:36.079153] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:37:41.036 [2024-10-01 22:36:36.079183] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:41.036 request: 00:37:41.036 { 00:37:41.036 "name": "nvme", 00:37:41.036 "trtype": "tcp", 00:37:41.036 "traddr": "10.0.0.2", 00:37:41.036 "adrfam": "ipv4", 00:37:41.036 "trsvcid": "8009", 00:37:41.036 "hostnqn": "nqn.2021-12.io.spdk:test", 00:37:41.036 "wait_for_attach": true, 00:37:41.036 "method": "bdev_nvme_start_discovery", 00:37:41.036 "req_id": 1 00:37:41.036 } 00:37:41.036 Got JSON-RPC error response 00:37:41.036 response: 00:37:41.036 { 00:37:41.036 "code": -17, 00:37:41.036 "message": "File exists" 00:37:41.036 } 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:41.036 request: 00:37:41.036 { 00:37:41.036 "name": "nvme_second", 00:37:41.036 "trtype": "tcp", 00:37:41.036 "traddr": "10.0.0.2", 00:37:41.036 "adrfam": "ipv4", 00:37:41.036 "trsvcid": "8009", 00:37:41.036 "hostnqn": "nqn.2021-12.io.spdk:test", 00:37:41.036 "wait_for_attach": true, 00:37:41.036 "method": "bdev_nvme_start_discovery", 00:37:41.036 "req_id": 1 00:37:41.036 } 00:37:41.036 Got JSON-RPC error response 00:37:41.036 response: 00:37:41.036 { 00:37:41.036 "code": -17, 00:37:41.036 "message": "File exists" 00:37:41.036 } 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:37:41.036 22:36:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:37:41.036 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:41.298 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:41.298 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:37:41.298 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:37:41.298 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:37:41.298 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:37:41.298 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:37:41.298 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:41.298 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:37:41.298 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:41.298 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:37:41.298 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:41.298 22:36:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:42.238 [2024-10-01 22:36:37.322615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.239 [2024-10-01 22:36:37.322649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc4510 with addr=10.0.0.2, port=8010 00:37:42.239 [2024-10-01 22:36:37.322664] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:37:42.239 [2024-10-01 22:36:37.322671] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:42.239 [2024-10-01 22:36:37.322678] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:37:43.180 [2024-10-01 22:36:38.324948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.180 [2024-10-01 22:36:38.324971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc4510 with addr=10.0.0.2, port=8010 00:37:43.180 [2024-10-01 22:36:38.324982] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:37:43.180 [2024-10-01 22:36:38.324989] nvme.c: 831:nvme_probe_internal: 
*ERROR*: NVMe ctrlr scan failed 00:37:43.180 [2024-10-01 22:36:38.324995] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:37:44.121 [2024-10-01 22:36:39.326953] bdev_nvme.c:7205:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:37:44.121 request: 00:37:44.121 { 00:37:44.121 "name": "nvme_second", 00:37:44.121 "trtype": "tcp", 00:37:44.121 "traddr": "10.0.0.2", 00:37:44.121 "adrfam": "ipv4", 00:37:44.121 "trsvcid": "8010", 00:37:44.121 "hostnqn": "nqn.2021-12.io.spdk:test", 00:37:44.121 "wait_for_attach": false, 00:37:44.121 "attach_timeout_ms": 3000, 00:37:44.121 "method": "bdev_nvme_start_discovery", 00:37:44.121 "req_id": 1 00:37:44.121 } 00:37:44.121 Got JSON-RPC error response 00:37:44.121 response: 00:37:44.121 { 00:37:44.121 "code": -110, 00:37:44.121 "message": "Connection timed out" 00:37:44.121 } 00:37:44.121 22:36:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:37:44.121 22:36:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:37:44.121 22:36:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:44.121 22:36:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:44.121 22:36:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:44.121 22:36:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:37:44.121 22:36:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:37:44.121 22:36:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:37:44.121 22:36:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:44.121 22:36:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:44.121 22:36:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:37:44.121 22:36:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:37:44.121 22:36:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:44.381 22:36:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:37:44.381 22:36:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:37:44.381 22:36:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 330877 00:37:44.381 22:36:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:37:44.381 22:36:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:37:44.381 22:36:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:37:44.381 22:36:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:44.381 22:36:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:37:44.381 22:36:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:44.381 22:36:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:44.381 rmmod nvme_tcp 00:37:44.381 rmmod nvme_fabrics 00:37:44.381 rmmod nvme_keyring 00:37:44.381 22:36:39 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:44.381 22:36:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:37:44.381 22:36:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:37:44.381 22:36:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@515 -- # '[' -n 330711 ']' 00:37:44.381 22:36:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # killprocess 330711 00:37:44.381 22:36:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 330711 ']' 00:37:44.381 22:36:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 330711 00:37:44.381 22:36:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:37:44.381 22:36:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:44.381 22:36:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 330711 00:37:44.381 22:36:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:44.381 22:36:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:44.381 22:36:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 330711' 00:37:44.381 killing process with pid 330711 00:37:44.381 22:36:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 330711 00:37:44.381 22:36:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 330711 00:37:44.643 22:36:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:37:44.643 22:36:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:37:44.643 22:36:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:37:44.643 22:36:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:37:44.643 22:36:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-save 00:37:44.643 22:36:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:37:44.643 22:36:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:37:44.643 22:36:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:44.643 22:36:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:44.643 22:36:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:44.643 22:36:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:44.643 22:36:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:46.557 22:36:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:46.557 00:37:46.557 real 0m19.984s 00:37:46.557 user 0m23.101s 00:37:46.557 sys 0m7.151s 00:37:46.557 22:36:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:46.557 22:36:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:46.557 
************************************ 00:37:46.557 END TEST nvmf_host_discovery 00:37:46.557 ************************************ 00:37:46.818 22:36:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:37:46.818 22:36:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:37:46.818 22:36:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:46.818 22:36:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:46.818 ************************************ 00:37:46.818 START TEST nvmf_host_multipath_status 00:37:46.818 ************************************ 00:37:46.818 22:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:37:46.818 * Looking for test storage... 00:37:46.818 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:46.818 22:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:46.818 22:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lcov --version 00:37:46.818 22:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:46.818 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:46.818 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:46.818 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:46.818 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:46.818 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:37:46.818 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:37:46.818 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:37:46.818 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:37:46.818 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:37:46.818 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:37:46.818 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:37:46.818 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:46.818 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:37:46.818 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:37:46.818 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:46.818 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:46.818 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:37:46.818 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:37:46.819 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:46.819 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:37:46.819 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:37:46.819 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:37:46.819 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:37:46.819 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:46.819 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:37:46.819 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:37:46.819 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:46.819 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:46.819 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:37:46.819 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:46.819 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:46.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:46.819 --rc genhtml_branch_coverage=1 00:37:46.819 --rc genhtml_function_coverage=1 00:37:46.819 --rc genhtml_legend=1 00:37:46.819 --rc geninfo_all_blocks=1 00:37:46.819 --rc geninfo_unexecuted_blocks=1 00:37:46.819 00:37:46.819 ' 00:37:46.819 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:46.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:46.819 --rc genhtml_branch_coverage=1 00:37:46.819 --rc genhtml_function_coverage=1 00:37:46.819 --rc genhtml_legend=1 00:37:46.819 --rc geninfo_all_blocks=1 00:37:46.819 --rc geninfo_unexecuted_blocks=1 00:37:46.819 00:37:46.819 ' 00:37:46.819 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:46.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:46.819 --rc genhtml_branch_coverage=1 00:37:46.819 --rc genhtml_function_coverage=1 00:37:46.819 --rc genhtml_legend=1 00:37:46.819 --rc geninfo_all_blocks=1 00:37:46.819 --rc geninfo_unexecuted_blocks=1 00:37:46.819 00:37:46.819 ' 00:37:46.819 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:46.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:46.819 --rc genhtml_branch_coverage=1 00:37:46.819 --rc genhtml_function_coverage=1 00:37:46.819 --rc genhtml_legend=1 00:37:46.819 --rc geninfo_all_blocks=1 00:37:46.819 --rc geninfo_unexecuted_blocks=1 00:37:46.819 00:37:46.819 ' 00:37:46.819 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:37:46.819 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:37:46.819 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:46.819 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:46.819 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:46.819 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:46.819 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:46.819 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:46.819 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:46.819 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:46.819 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:46.819 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:47.080 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:37:47.080 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:37:47.080 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:47.080 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:47.080 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:47.080 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:47.080 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:47.080 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:37:47.080 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:47.080 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:47.080 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:47.080 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:47.080 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:47.080 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:47.080 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:37:47.080 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:47.080 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:37:47.080 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:47.080 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:47.080 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:47.080 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:47.080 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:47.080 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:47.080 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:47.080 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:47.080 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:47.080 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:47.080 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:37:47.080 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:37:47.080 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:47.081 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:37:47.081 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:47.081 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:37:47.081 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:37:47.081 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:37:47.081 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:47.081 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # prepare_net_devs 00:37:47.081 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # local -g is_hw=no 00:37:47.081 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # remove_spdk_ns 00:37:47.081 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:47.081 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:47.081 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:47.081 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:37:47.081 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:37:47.081 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:37:47.081 22:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:37:55.225 22:36:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:55.225 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:55.225 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:55.225 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:55.226 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: 
cvl_0_1' 00:37:55.226 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # is_hw=yes 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:55.226 22:36:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:55.226 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:55.226 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.525 ms 00:37:55.226 00:37:55.226 --- 10.0.0.2 ping statistics --- 00:37:55.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:55.226 rtt min/avg/max/mdev = 0.525/0.525/0.525/0.000 ms 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:55.226 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:55.226 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:37:55.226 00:37:55.226 --- 10.0.0.1 ping statistics --- 00:37:55.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:55.226 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # return 0 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # nvmfpid=337049 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # waitforlisten 337049 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 337049 ']' 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:55.226 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:55.227 22:36:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:55.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:55.227 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:55.227 22:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:37:55.227 [2024-10-01 22:36:49.559728] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:37:55.227 [2024-10-01 22:36:49.559793] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:55.227 [2024-10-01 22:36:49.631786] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:55.227 [2024-10-01 22:36:49.705403] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:55.227 [2024-10-01 22:36:49.705441] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:55.227 [2024-10-01 22:36:49.705449] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:55.227 [2024-10-01 22:36:49.705456] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:55.227 [2024-10-01 22:36:49.705461] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:55.227 [2024-10-01 22:36:49.705597] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:55.227 [2024-10-01 22:36:49.705599] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:55.227 22:36:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:55.227 22:36:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:37:55.227 22:36:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:55.227 22:36:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:55.227 22:36:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:37:55.227 22:36:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:55.227 22:36:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=337049 00:37:55.227 22:36:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:55.488 [2024-10-01 22:36:50.545726] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:55.488 22:36:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:37:55.488 Malloc0 00:37:55.488 22:36:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:37:55.749 22:36:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:56.010 22:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:56.010 [2024-10-01 22:36:51.239118] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:56.271 22:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:37:56.271 [2024-10-01 22:36:51.423591] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:37:56.271 22:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=337408 00:37:56.271 22:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:56.271 22:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:37:56.271 22:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 337408 /var/tmp/bdevperf.sock 00:37:56.271 22:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 337408 ']' 00:37:56.271 22:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:56.271 22:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:56.271 22:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:56.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:37:56.271 22:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:56.271 22:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:37:57.213 22:36:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:57.213 22:36:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:37:57.213 22:36:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:37:57.213 22:36:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:37:57.784 Nvme0n1 00:37:57.784 22:36:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:37:58.045 Nvme0n1 00:37:58.045 22:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:37:58.045 22:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:37:59.960 22:36:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:37:59.960 22:36:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:38:00.220 22:36:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:38:00.481 22:36:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:38:01.424 22:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:38:01.424 22:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:38:01.424 22:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:01.424 22:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:38:01.684 22:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:01.684 22:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:38:01.684 22:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:01.684 22:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:38:01.944 22:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:01.944 22:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:38:01.944 22:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:01.944 22:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:38:01.944 22:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:01.944 22:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:38:01.944 22:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:01.944 22:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:38:02.204 22:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:02.204 22:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:38:02.204 22:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:38:02.204 22:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:02.464 22:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:02.464 22:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:38:02.464 22:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:38:02.464 22:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:02.465 22:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:02.465 22:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:38:02.465 22:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:38:02.726 22:36:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:38:02.990 22:36:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:38:03.992 22:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:38:03.992 22:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:38:03.992 22:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:03.992 22:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:38:03.992 22:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:03.992 22:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:38:03.992 22:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:03.992 22:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:38:04.252 22:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:04.252 22:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:38:04.252 22:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:04.252 22:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:38:04.514 22:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:04.514 22:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:38:04.514 22:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:04.514 22:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:38:04.514 22:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:04.514 22:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:38:04.514 22:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:04.514 22:36:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:38:04.774 22:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:04.774 22:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:38:04.774 22:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:04.774 22:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:38:05.034 22:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:05.034 22:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:38:05.034 22:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:38:05.034 22:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:38:05.294 22:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:38:06.235 22:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:38:06.235 22:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:38:06.235 22:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:06.235 22:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:38:06.495 22:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:06.495 22:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:38:06.495 22:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:38:06.495 22:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:06.755 22:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:06.755 22:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:38:06.755 22:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:06.755 22:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:38:06.755 22:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:07.015 22:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:38:07.015 22:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:07.015 22:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:38:07.015 22:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:07.015 22:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:38:07.015 22:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:38:07.015 22:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:07.275 22:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:07.275 22:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:38:07.275 22:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:07.275 22:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:38:07.535 22:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:07.535 22:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:38:07.535 22:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:38:07.535 22:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:38:07.795 22:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:38:08.735 22:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:38:08.735 22:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:38:08.735 22:37:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:08.735 22:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:38:08.996 22:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:08.996 22:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:38:08.996 22:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:08.996 22:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:38:09.257 22:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:09.257 22:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:38:09.257 22:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:09.257 22:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:38:09.257 22:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:09.257 22:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:38:09.257 22:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:09.258 22:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:38:09.519 22:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:09.519 22:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:38:09.519 22:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:09.519 22:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:38:09.779 22:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:09.779 22:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:38:09.779 22:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:09.779 22:37:04 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:38:10.043 22:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:10.043 22:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:38:10.043 22:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:38:10.043 22:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:38:10.305 22:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:38:11.246 22:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:38:11.246 22:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:38:11.246 22:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:11.246 22:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:38:11.506 22:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:11.506 22:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:38:11.506 22:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:11.506 22:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:38:11.768 22:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:11.768 22:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:38:11.768 22:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:11.768 22:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:38:11.768 22:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:11.768 22:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:38:11.768 22:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:11.768 22:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:38:12.028 22:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:12.028 22:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:38:12.029 22:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:12.029 22:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:38:12.289 22:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:12.289 22:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:38:12.289 22:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:12.289 22:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:38:12.289 22:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:12.289 22:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:38:12.289 22:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:38:12.550 22:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:38:12.817 22:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:38:13.764 22:37:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:38:13.764 22:37:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:38:13.764 22:37:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:13.764 22:37:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:38:14.025 22:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:14.025 22:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:38:14.025 22:37:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:14.025 22:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:38:14.025 22:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:14.025 22:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:38:14.025 22:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:14.025 22:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:38:14.285 22:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:14.285 22:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:38:14.285 22:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:38:14.285 22:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:14.547 22:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:14.547 22:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:38:14.547 22:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:14.547 22:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:38:14.547 22:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:14.547 22:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:38:14.547 22:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:14.547 22:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:38:14.809 22:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:14.809 22:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:38:15.070 22:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:38:15.070 22:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:38:15.070 22:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:38:15.331 22:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:38:16.272 22:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:38:16.272 22:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:38:16.272 22:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:16.272 22:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:38:16.534 22:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:16.534 22:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:38:16.534 22:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:16.534 22:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:38:16.794 22:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:16.794 22:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:38:16.794 22:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:16.794 22:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:38:16.794 22:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:16.794 22:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:38:16.794 22:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:16.794 22:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:38:17.055 22:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:17.055 22:37:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:38:17.055 22:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:17.055 22:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:38:17.315 22:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:17.315 22:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:38:17.315 22:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:17.315 22:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:38:17.576 22:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:17.576 22:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:38:17.576 22:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:38:17.576 22:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:38:17.837 22:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:38:18.778 22:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:38:18.778 22:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:38:18.778 22:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:18.778 22:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:38:19.039 22:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:19.039 22:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:38:19.039 22:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:19.039 22:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:38:19.299 22:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:19.299 22:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:38:19.299 22:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:19.299 22:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:38:19.299 22:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:19.299 22:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:38:19.299 22:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:19.299 22:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:38:19.560 22:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:19.560 22:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:38:19.560 22:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:19.560 22:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:38:19.821 22:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:19.821 22:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:38:19.821 22:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:19.821 22:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:38:19.821 22:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:19.821 22:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:38:19.821 22:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:38:20.082 22:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:38:20.342 22:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
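The loop traced above repeats the same three-step pattern each time: set_ANA_state against the target, a one-second settle, then check_status reading bdev_nvme_get_io_paths through jq on the bdevperf RPC socket. A minimal sketch of those helpers, reconstructed from the traced commands only (the function bodies, local names, and && chaining are assumptions; the upstream test/nvmf/host/multipath_status.sh may differ in detail):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock
    NQN=nqn.2016-06.io.spdk:cnode1

    # Set the ANA state of the two target listeners: $1 applies to port 4420, $2 to 4421.
    set_ANA_state() {
        $rpc_py nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        $rpc_py nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    # Compare one io_path field against an expected value.
    # $1: trsvcid, $2: field (current|connected|accessible), $3: expected true/false.
    port_status() {
        local status
        status=$($rpc_py -s $bdevperf_rpc_sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
        [[ "$status" == "$3" ]]
    }

    # Expected values in the order traced above:
    # 4420 current, 4421 current, 4420 connected, 4421 connected,
    # 4420 accessible, 4421 accessible.
    check_status() {
        port_status 4420 current "$1" && port_status 4421 current "$2" &&
        port_status 4420 connected "$3" && port_status 4421 connected "$4" &&
        port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
    }

Note the expectation change after bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active (multipath_status.sh@116 above): from that point every accessible path is expected to report current=true at the same time, where the earlier active/passive checks expected exactly one current path.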
00:38:21.284 22:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:38:21.284 22:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:38:21.284 22:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:21.284 22:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:38:21.546 22:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:21.546 22:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:38:21.546 22:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:21.546 22:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:38:21.546 22:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:21.546 22:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:38:21.546 22:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:21.546 22:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:38:21.806 22:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:21.806 22:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:38:21.806 22:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:21.806 22:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:38:22.067 22:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:22.067 22:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:38:22.067 22:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:22.067 22:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:38:22.328 22:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:22.328 22:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:38:22.328 22:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:22.328 22:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:38:22.328 22:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:22.328 22:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:38:22.328 22:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:38:22.588 22:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:38:22.849 22:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:38:23.790 22:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:38:23.790 22:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:38:23.790 22:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:23.790 22:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:38:24.051 22:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:24.051 22:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:38:24.051 22:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:24.051 22:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:38:24.051 22:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:24.051 22:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:38:24.051 22:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:24.051 22:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:38:24.311 22:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:38:24.311 22:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:38:24.311 22:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:24.311 22:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:38:24.572 22:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:24.572 22:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:38:24.572 22:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:24.572 22:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:38:24.572 22:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:24.572 22:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:38:24.833 22:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:24.833 22:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:38:24.833 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:24.833 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 337408 00:38:24.833 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 337408 ']' 00:38:24.833 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 337408 00:38:24.833 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:38:24.833 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:24.833 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 337408 00:38:24.833 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:38:24.833 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:38:24.833 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 337408' 00:38:24.833 killing process with pid 337408 00:38:24.833 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 337408 00:38:24.833 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 337408 00:38:24.833 { 00:38:24.833 "results": [ 00:38:24.833 { 00:38:24.833 "job": "Nvme0n1", 00:38:24.833 
"core_mask": "0x4", 00:38:24.833 "workload": "verify", 00:38:24.833 "status": "terminated", 00:38:24.833 "verify_range": { 00:38:24.833 "start": 0, 00:38:24.833 "length": 16384 00:38:24.833 }, 00:38:24.833 "queue_depth": 128, 00:38:24.833 "io_size": 4096, 00:38:24.833 "runtime": 26.73697, 00:38:24.833 "iops": 10767.861878141017, 00:38:24.833 "mibps": 42.06196046148835, 00:38:24.833 "io_failed": 0, 00:38:24.833 "io_timeout": 0, 00:38:24.833 "avg_latency_us": 11867.15541118444, 00:38:24.833 "min_latency_us": 351.5733333333333, 00:38:24.833 "max_latency_us": 3019898.88 00:38:24.833 } 00:38:24.833 ], 00:38:24.833 "core_count": 1 00:38:24.833 } 00:38:25.097 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 337408 00:38:25.097 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:38:25.097 [2024-10-01 22:36:51.505956] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:38:25.097 [2024-10-01 22:36:51.506015] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid337408 ] 00:38:25.097 [2024-10-01 22:36:51.557490] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:25.097 [2024-10-01 22:36:51.610131] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:38:25.097 [2024-10-01 22:36:53.105870] bdev_nvme.c:5605:nvme_bdev_ctrlr_create: *WARNING*: multipath_config: deprecated feature bdev_nvme_attach_controller.multipath configuration mismatch to be removed in v25.01 00:38:25.097 Running I/O for 90 seconds... 
00:38:25.097 9541.00 IOPS, 37.27 MiB/s 9560.50 IOPS, 37.35 MiB/s 9590.00 IOPS, 37.46 MiB/s 9573.50 IOPS, 37.40 MiB/s 9832.00 IOPS, 38.41 MiB/s 10337.67 IOPS, 40.38 MiB/s 10716.00 IOPS, 41.86 MiB/s 10650.12 IOPS, 41.60 MiB/s 10524.00 IOPS, 41.11 MiB/s 10435.80 IOPS, 40.76 MiB/s 10353.64 IOPS, 40.44 MiB/s [2024-10-01 22:37:05.209048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:63120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.097 [2024-10-01 22:37:05.209080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:38:25.097 [2024-10-01 22:37:05.209113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:63128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.097 [2024-10-01 22:37:05.209120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:38:25.097 [2024-10-01 22:37:05.209131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:63136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.097 [2024-10-01 22:37:05.209137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:38:25.097 [2024-10-01 22:37:05.209147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:63144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.097 [2024-10-01 22:37:05.209152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:38:25.097 [2024-10-01 22:37:05.209163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:63152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.097 [2024-10-01 22:37:05.209168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:25.097 [2024-10-01 22:37:05.209178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:63160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.097 [2024-10-01 22:37:05.209183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:25.097 [2024-10-01 22:37:05.209194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:63168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.097 [2024-10-01 22:37:05.209199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:38:25.097 [2024-10-01 22:37:05.209210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:63176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.097 [2024-10-01 22:37:05.209215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:38:25.097 [2024-10-01 22:37:05.209358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:25.097 [2024-10-01 22:37:05.209366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:38:25.097 
[2024-10-01 22:37:05.209383 - 22:37:05.214001] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: a long run (~120 near-identical command/completion pairs, condensed here) of queued WRITE commands (sqid:1 nsid:1 lba:63192-63520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands (sqid:1 nsid:1 lba:62504-63112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:0046-003c p:0 m:0 dnr:0
00:38:25.100 10208.42 IOPS, 39.88 MiB/s
9423.15 IOPS, 36.81 MiB/s
8750.07 IOPS, 34.18 MiB/s
8254.27 IOPS, 32.24 MiB/s
8554.75 IOPS, 33.42 MiB/s
8798.35 IOPS, 34.37 MiB/s
9252.33 IOPS, 36.14 MiB/s
9660.74 IOPS, 37.74 MiB/s
9933.35 IOPS, 38.80 MiB/s
10066.57 IOPS, 39.32 MiB/s
10199.50 IOPS, 39.84 MiB/s
10476.22 IOPS, 40.92 MiB/s
10748.17 IOPS, 41.99 MiB/s
[2024-10-01 22:37:17.859855 - 22:37:17.860916] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: a second condensed run of queued READ commands (sqid:1 nsid:1 lba:36328-37072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE commands (sqid:1 nsid:1 lba:37176-37240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000), every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:003f-0050 p:0 m:0 dnr:0
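For reference, the "(03/02)" pair printed in each completion above is NVMe Status Code Type 3h (Path Related Status) with Status Code 02h (ASYMMETRIC ACCESS INACCESSIBLE): the namespace's ANA state on the current path forbids I/O while the multipath test moves the active path, and dnr:0 marks each failure as retryable. A minimal decode sketch, using a hypothetical helper that is not part of the SPDK harness:

  decode_nvme_status() {
    # takes the "SCT/SC" pair exactly as spdk_nvme_print_completion prints it, e.g. 03/02
    local sct=$((16#${1%%/*})) sc=$((16#${1##*/}))
    if ((sct == 0x3 && sc == 0x2)); then
      echo "Path Related Status / Asymmetric Access Inaccessible - retryable, ANA failover in progress"
    else
      printf 'sct=0x%02x sc=0x%02x\n' "$sct" "$sc"
    fi
  }
  decode_nvme_status 03/02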
00:38:25.101 [2024-10-01 22:37:17.860921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:38:25.101 [2024-10-01 22:37:17.860931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:37104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.101 [2024-10-01 22:37:17.860937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:38:25.101 [2024-10-01 22:37:17.860947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:37136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.101 [2024-10-01 22:37:17.860953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:38:25.101 10853.60 IOPS, 42.40 MiB/s
10804.27 IOPS, 42.20 MiB/s
Received shutdown signal, test time was about 26.737583 seconds
00:38:25.101
00:38:25.101 Latency(us)
00:38:25.101 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:25.101 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:38:25.101 Verification LBA range: start 0x0 length 0x4000
00:38:25.101 Nvme0n1 : 26.74 10767.86 42.06 0.00 0.00 11867.16 351.57 3019898.88
00:38:25.101 ===================================================================================================================
00:38:25.101 Total : 10767.86 42.06 0.00 0.00 11867.16 351.57 3019898.88
00:38:25.101 [2024-10-01 22:37:20.083592] app.c:1032:log_deprecation_hits: *WARNING*: multipath_config: deprecation 'bdev_nvme_attach_controller.multipath configuration mismatch' scheduled for removal in v25.01 hit 1 times
00:38:25.101 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:38:25.362 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:38:25.362 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:38:25.362 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:38:25.362 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # nvmfcleanup
00:38:25.362 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:38:25.362 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:38:25.362 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:38:25.362 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:38:25.362 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:38:25.362 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:38:25.362 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:38:25.362 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
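The summary table above is internally consistent: with 4096-byte I/Os, 10767.86 IOPS works out to the reported 42.06 MiB/s, and at queue depth 128 Little's law predicts an average latency of roughly 11887 us, close to the reported 11867.16 us. A quick check of that arithmetic (awk used here only as a calculator; it is not part of the test):

  awk 'BEGIN {
    iops = 10767.86; io_size = 4096; depth = 128
    printf "throughput: %.2f MiB/s\n", iops * io_size / (1024 * 1024)  # 42.06, matching the table
    printf "avg latency (Little): %.0f us\n", depth / iops * 1e6       # ~11887, near the reported 11867.16
  }'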
00:38:25.362 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@515 -- # '[' -n 337049 ']'
00:38:25.362 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # killprocess 337049
00:38:25.362 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 337049 ']'
00:38:25.362 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 337049
00:38:25.362 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
00:38:25.362 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:38:25.362 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 337049
00:38:25.362 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:38:25.362 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:38:25.362 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 337049'
killing process with pid 337049
00:38:25.362 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 337049
00:38:25.362 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 337049
00:38:25.623 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:38:25.623 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:38:25.624 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:38:25.624 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:38:25.624 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-save
00:38:25.624 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:38:25.624 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-restore
00:38:25.624 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:38:25.624 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
00:38:25.624 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:38:25.624 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:38:25.624 22:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:38:28.170 22:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:38:28.170
00:38:28.170 real 0m40.975s
00:38:28.170 user 1m46.013s
00:38:28.170 sys 0m11.399s
00:38:28.170 22:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable
00:38:28.170 22:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:38:28.170 ************************************
00:38:28.170 END TEST nvmf_host_multipath_status
00:38:28.170 ************************************
00:38:28.170 22:37:22 nvmf_tcp.nvmf_host --
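The killprocess sequence traced above follows a standard teardown pattern: probe liveness with kill -0, inspect the process name so a privileged sudo wrapper is never signalled by mistake, then SIGTERM the pid and reap it with wait (which works here because the target app is a child of the test shell). A condensed sketch of that flow, not the verbatim autotest_common.sh implementation:

  killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                   # nothing to kill
    kill -0 "$pid" 2>/dev/null || return 0      # already gone
    [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1  # never signal a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"                  # reap the child so no zombie is left behind
  }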
nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:38:28.170 22:37:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:38:28.170 22:37:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:28.170 22:37:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:28.170 ************************************ 00:38:28.170 START TEST nvmf_discovery_remove_ifc 00:38:28.170 ************************************ 00:38:28.170 22:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:38:28.170 * Looking for test storage... 00:38:28.170 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:38:28.170 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:28.170 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lcov --version 00:38:28.170 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:28.170 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:28.170 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:28.170 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:28.170 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:28.170 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:38:28.170 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:38:28.170 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:38:28.170 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:38:28.170 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:38:28.170 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:38:28.170 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:38:28.170 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:28.170 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:38:28.170 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:38:28.170 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:28.170 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:28.170 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:38:28.170 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:38:28.170 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:28.170 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:38:28.170 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:38:28.170 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:38:28.170 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:38:28.170 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:28.170 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:38:28.170 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:38:28.170 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:28.170 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:28.170 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:38:28.170 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:28.170 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:28.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:28.170 --rc genhtml_branch_coverage=1 00:38:28.170 --rc genhtml_function_coverage=1 00:38:28.170 --rc genhtml_legend=1 00:38:28.170 --rc geninfo_all_blocks=1 00:38:28.170 --rc geninfo_unexecuted_blocks=1 00:38:28.170 00:38:28.170 ' 00:38:28.170 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:28.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:28.170 --rc genhtml_branch_coverage=1 00:38:28.170 --rc genhtml_function_coverage=1 00:38:28.170 --rc genhtml_legend=1 00:38:28.170 --rc geninfo_all_blocks=1 00:38:28.170 --rc geninfo_unexecuted_blocks=1 00:38:28.170 00:38:28.170 ' 00:38:28.170 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:28.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:28.170 --rc genhtml_branch_coverage=1 00:38:28.170 --rc genhtml_function_coverage=1 00:38:28.170 --rc genhtml_legend=1 00:38:28.170 --rc geninfo_all_blocks=1 00:38:28.170 --rc geninfo_unexecuted_blocks=1 00:38:28.170 00:38:28.170 ' 00:38:28.170 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:28.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:28.170 --rc genhtml_branch_coverage=1 00:38:28.171 --rc genhtml_function_coverage=1 00:38:28.171 --rc genhtml_legend=1 00:38:28.171 --rc geninfo_all_blocks=1 00:38:28.171 --rc geninfo_unexecuted_blocks=1 00:38:28.171 00:38:28.171 ' 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:28.171 
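The scripts/common.sh trace above implements a dotted-version comparison: lt 1.15 2 splits both strings on '.' and '-', then walks the numeric fields left to right until one side wins, so lcov 1.15 sorts below 2 and the branch-coverage options get exported. A condensed sketch of that walk (not the verbatim script):

  lt() {
    # return 0 when $1 sorts before $2, compared field by field
    local IFS=.-
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal
  }
  lt 1.15 2 && echo 'lcov 1.15 < 2'   # the branch this run takes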
22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:28.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
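One genuine wart the trace records just above: nvmf/common.sh line 33 runs `[ '' -eq 1 ]` and bash complains `integer expression expected`, because `-eq` demands integers on both sides and the variable under test expands to nothing in this environment. The run continues (the failed `[` just takes the else path), but the noise is avoidable with an empty-safe test; `FLAG` below is a stand-in, since the real variable name is not visible in this log:

# empty-safe variants of '[ "$FLAG" -eq 1 ]'
if [[ -n ${FLAG:-} ]] && [ "$FLAG" -eq 1 ]; then
    echo "flag set"
fi
if (( ${FLAG:-0} == 1 )); then   # arithmetic form: empty counts as 0
    echo "flag set"
fi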
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:38:28.171 22:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:38:36.315 22:37:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:38:36.315 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:36.315 22:37:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:38:36.315 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:38:36.315 Found net devices under 0000:4b:00.0: cvl_0_0 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:36.315 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:36.316 Found net devices under 0000:4b:00.1: cvl_0_1 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # 
net_devs+=("${pci_net_devs[@]}") 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # is_hw=yes 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:36.316 
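nvmf_tcp_init, traced above, turns the two ice ports into a self-contained point-to-point rig: the target-side NIC moves into its own network namespace, so a single machine can exercise NVMe/TCP over real hardware with the kernel host stack on one end and the SPDK target on the other. The essential commands, condensed from the trace with names and addresses exactly as logged; the two pings that follow then prove the path in both directions:

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start from a clean slate
ip netns add cvl_0_0_ns_spdk                           # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target NIC disappears from the host
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address, host side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'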
22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:36.316 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:36.316 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.581 ms 00:38:36.316 00:38:36.316 --- 10.0.0.2 ping statistics --- 00:38:36.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:36.316 rtt min/avg/max/mdev = 0.581/0.581/0.581/0.000 ms 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:36.316 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:36.316 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:38:36.316 00:38:36.316 --- 10.0.0.1 ping statistics --- 00:38:36.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:36.316 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # return 0 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # nvmfpid=347317 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # waitforlisten 347317 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 347317 ']' 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:38:36.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:36.316 22:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:36.316 [2024-10-01 22:37:30.508995] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:38:36.316 [2024-10-01 22:37:30.509054] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:36.316 [2024-10-01 22:37:30.597646] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:36.316 [2024-10-01 22:37:30.691160] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:36.316 [2024-10-01 22:37:30.691221] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:36.316 [2024-10-01 22:37:30.691230] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:36.316 [2024-10-01 22:37:30.691237] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:36.316 [2024-10-01 22:37:30.691243] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:36.316 [2024-10-01 22:37:30.691268] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:38:36.316 22:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:36.316 22:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:38:36.316 22:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:38:36.316 22:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:36.316 22:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:36.316 22:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:36.316 22:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:38:36.316 22:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.316 22:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:36.316 [2024-10-01 22:37:31.386390] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:36.316 [2024-10-01 22:37:31.394662] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:38:36.316 null0 00:38:36.317 [2024-10-01 22:37:31.426579] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:36.317 22:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.317 22:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=347661 00:38:36.317 22:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 347661 /tmp/host.sock 00:38:36.317 22:37:31 
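The bare `rpc_cmd` at discovery_remove_ifc.sh@43 feeds the freshly started target a batch of RPCs over a heredoc, so the xtrace shows only the resulting notices: TCP transport init, a discovery listener on 10.0.0.2:8009, a null0 bdev, and a data listener on 10.0.0.2:4420. The RPC names below are real SPDK RPCs, but the exact batch and its arguments are a plausible reconstruction from those notices, not a copy of the script:

rpc.py nvmf_create_transport -t tcp -o                      # '-t tcp -o' per NVMF_TRANSPORT_OPTS above
rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
    -t tcp -a 10.0.0.2 -s 8009                              # discovery service on 8009
rpc.py bdev_null_create null0 1000 512                      # sizes assumed for illustration
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420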
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:38:36.317 22:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 347661 ']' 00:38:36.317 22:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:38:36.317 22:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:36.317 22:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:38:36.317 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:38:36.317 22:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:36.317 22:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:36.317 [2024-10-01 22:37:31.502622] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:38:36.317 [2024-10-01 22:37:31.502696] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid347661 ] 00:38:36.576 [2024-10-01 22:37:31.568309] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:36.576 [2024-10-01 22:37:31.642504] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:38:37.145 22:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:37.145 22:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:38:37.145 22:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:37.145 22:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:38:37.145 22:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:37.145 22:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:37.145 22:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:37.145 22:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:38:37.145 22:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:37.145 22:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:37.405 22:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:37.405 22:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:38:37.405 22:37:32 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:37.405 22:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:38.345 [2024-10-01 22:37:33.484719] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:38:38.345 [2024-10-01 22:37:33.484744] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:38:38.345 [2024-10-01 22:37:33.484759] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:38:38.345 [2024-10-01 22:37:33.573062] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:38:38.606 [2024-10-01 22:37:33.636336] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:38:38.606 [2024-10-01 22:37:33.636386] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:38:38.606 [2024-10-01 22:37:33.636407] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:38:38.606 [2024-10-01 22:37:33.636420] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:38:38.606 [2024-10-01 22:37:33.636440] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:38:38.606 22:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:38.606 22:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:38:38.606 22:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:38:38.606 22:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:38:38.606 22:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:38:38.606 22:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:38.606 22:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:38:38.606 22:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:38.606 22:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:38:38.606 22:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:38.606 [2024-10-01 22:37:33.685393] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xdf35d0 was disconnected and freed. delete nvme_qpair. 
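From here the test settles into a poll loop: wait_for_bdev asks the host app for its bdev list once a second until the list matches what the scenario expects. The jq pipeline is visible in the trace at @29; the loop bodies below are reconstructed from the per-iteration output (the real helpers live in host/discovery_remove_ifc.sh):

get_bdev_list() {
    # discovery_remove_ifc.sh@29: bdev names, sorted, flattened to one line
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # discovery_remove_ifc.sh@33-34: poll until the list equals the expectation
    # ('' while no controller is attached, nvme0n1/nvme1n1 once one is)
    while [[ $(get_bdev_list) != "$1" ]]; do
        sleep 1
    done
}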
00:38:38.606 22:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:38:38.606 22:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:38:38.606 22:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:38:38.606 22:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:38:38.606 22:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:38:38.606 22:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:38:38.606 22:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:38.606 22:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:38:38.606 22:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:38.606 22:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:38:38.606 22:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:38:38.606 22:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:38.606 22:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:38:38.606 22:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:38:39.988 22:37:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:38:39.988 22:37:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:38:39.988 22:37:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:38:39.988 22:37:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:39.988 22:37:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:38:39.988 22:37:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:39.988 22:37:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:38:39.988 22:37:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:39.988 22:37:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:38:39.988 22:37:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:38:40.929 22:37:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:38:40.929 22:37:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:38:40.929 22:37:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:38:40.929 22:37:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:40.929 22:37:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:40.929 22:37:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:38:40.929 22:37:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:38:40.929 22:37:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:40.929 22:37:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:38:40.929 22:37:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:38:41.869 22:37:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:38:41.869 22:37:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:38:41.869 22:37:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:38:41.869 22:37:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:41.869 22:37:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:38:41.869 22:37:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:41.869 22:37:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:38:41.869 22:37:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:41.869 22:37:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:38:41.869 22:37:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:38:42.812 22:37:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:38:42.812 22:37:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:38:42.812 22:37:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:38:42.812 22:37:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:38:42.812 22:37:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:42.812 22:37:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:42.812 22:37:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:38:42.812 22:37:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:43.072 22:37:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:38:43.072 22:37:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:38:44.015 [2024-10-01 22:37:39.077256] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:38:44.015 [2024-10-01 22:37:39.077296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:38:44.015 [2024-10-01 22:37:39.077308] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:44.015 [2024-10-01 22:37:39.077318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:38:44.015 [2024-10-01 22:37:39.077326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:44.015 [2024-10-01 22:37:39.077334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:38:44.015 [2024-10-01 22:37:39.077342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:44.015 [2024-10-01 22:37:39.077351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:38:44.015 [2024-10-01 22:37:39.077359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:44.015 [2024-10-01 22:37:39.077367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:38:44.015 [2024-10-01 22:37:39.077379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:44.015 [2024-10-01 22:37:39.077387] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdcfec0 is same with the state(6) to be set 00:38:44.015 22:37:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:38:44.015 22:37:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:38:44.015 22:37:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:38:44.015 22:37:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:44.015 22:37:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:38:44.015 22:37:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:44.015 22:37:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:38:44.015 [2024-10-01 22:37:39.087278] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdcfec0 (9): Bad file descriptor 00:38:44.015 [2024-10-01 22:37:39.097317] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:38:44.015 22:37:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:44.015 22:37:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:38:44.015 22:37:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:38:44.958 22:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:38:44.958 22:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:38:44.958 22:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq 
-r '.[].name' 00:38:44.958 22:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:44.958 22:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:38:44.958 22:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:44.958 22:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:38:44.958 [2024-10-01 22:37:40.143667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:38:44.958 [2024-10-01 22:37:40.143717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdcfec0 with addr=10.0.0.2, port=4420 00:38:44.958 [2024-10-01 22:37:40.143734] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdcfec0 is same with the state(6) to be set 00:38:44.958 [2024-10-01 22:37:40.143768] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdcfec0 (9): Bad file descriptor 00:38:44.958 [2024-10-01 22:37:40.144171] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:38:44.958 [2024-10-01 22:37:40.144197] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:38:44.958 [2024-10-01 22:37:40.144205] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:38:44.958 [2024-10-01 22:37:40.144214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:38:44.958 [2024-10-01 22:37:40.144234] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:44.958 [2024-10-01 22:37:40.144243] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:38:44.958 22:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:44.958 22:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:38:44.958 22:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:38:45.903 [2024-10-01 22:37:41.146619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:38:45.903 [2024-10-01 22:37:41.146647] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:38:45.903 [2024-10-01 22:37:41.146656] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:38:45.903 [2024-10-01 22:37:41.146663] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:38:45.903 [2024-10-01 22:37:41.146676] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
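The reset/reconnect churn above is exactly what the knobs on the earlier bdev_nvme_start_discovery call buy: retry the connection every --reconnect-delay-sec, mark I/O fast-failed after --fast-io-fail-timeout-sec, and give up entirely (deleting the controller, which is what empties the bdev list) after --ctrlr-loss-timeout-sec. The same knobs apply to directly attached controllers; the timeout values below mirror the test, while the rest of the invocation is illustrative:

rpc.py -s /tmp/host.sock bdev_nvme_attach_controller -b nvme -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1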
00:38:45.903 [2024-10-01 22:37:41.146697] bdev_nvme.c:6913:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:38:45.903 [2024-10-01 22:37:41.146721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:38:45.903 [2024-10-01 22:37:41.146732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:45.903 [2024-10-01 22:37:41.146743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:38:45.903 [2024-10-01 22:37:41.146750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:45.903 [2024-10-01 22:37:41.146759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:38:45.903 [2024-10-01 22:37:41.146766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:45.903 [2024-10-01 22:37:41.146774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:38:45.903 [2024-10-01 22:37:41.146782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:45.903 [2024-10-01 22:37:41.146791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:38:45.903 [2024-10-01 22:37:41.146798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:45.903 [2024-10-01 22:37:41.146805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:38:45.903 [2024-10-01 22:37:41.147013] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf5d0 (9): Bad file descriptor 00:38:45.903 [2024-10-01 22:37:41.148026] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:38:45.903 [2024-10-01 22:37:41.148037] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:38:46.164 22:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:38:46.164 22:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:38:46.164 22:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:38:46.164 22:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:46.164 22:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:38:46.164 22:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:46.164 22:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:38:46.164 22:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:46.164 22:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:38:46.164 22:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:46.164 22:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:46.164 22:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:38:46.164 22:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:38:46.164 22:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:38:46.164 22:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:38:46.164 22:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:46.164 22:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:46.164 22:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:38:46.164 22:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:38:46.164 22:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:46.164 22:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:38:46.164 22:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:38:47.548 22:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:38:47.548 22:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:38:47.548 22:37:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:38:47.548 22:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:47.548 22:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:38:47.548 22:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:47.548 22:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:38:47.548 22:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:47.548 22:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:38:47.548 22:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:38:48.120 [2024-10-01 22:37:43.200575] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:38:48.120 [2024-10-01 22:37:43.200595] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:38:48.120 [2024-10-01 22:37:43.200609] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:38:48.120 [2024-10-01 22:37:43.328012] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:38:48.381 [2024-10-01 22:37:43.389685] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:38:48.381 [2024-10-01 22:37:43.389723] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:38:48.381 [2024-10-01 22:37:43.389742] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:38:48.381 [2024-10-01 22:37:43.389756] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:38:48.381 [2024-10-01 22:37:43.389764] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:38:48.381 [2024-10-01 22:37:43.438375] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xddaf10 was disconnected and freed. delete nvme_qpair. 
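Stripped of the polling noise, the whole scenario the log has just played out is six lines of the script (steps @75-@86, commands exactly as traced):

ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0   # @75: yank the target address
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down              # @76: and the link
wait_for_bdev ''            # @79: nvme0n1 must drain once ctrlr-loss-timeout expires
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # @82: restore the address
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up                # @83: and the link
wait_for_bdev nvme1n1       # @86: discovery re-attaches as a brand-new controller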
00:38:48.382 22:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:38:48.382 22:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:38:48.382 22:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:38:48.382 22:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:38:48.382 22:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:48.382 22:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:38:48.382 22:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:48.382 22:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:48.382 22:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:38:48.382 22:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:38:48.382 22:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 347661 00:38:48.382 22:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 347661 ']' 00:38:48.382 22:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 347661 00:38:48.382 22:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:38:48.382 22:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:48.382 22:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 347661 00:38:48.382 22:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:48.382 22:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:48.382 22:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 347661' 00:38:48.382 killing process with pid 347661 00:38:48.382 22:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 347661 00:38:48.382 22:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 347661 00:38:48.643 22:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:38:48.643 22:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:48.643 22:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:38:48.643 22:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:48.643 22:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:38:48.643 22:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:48.643 22:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:48.643 rmmod nvme_tcp 00:38:48.643 rmmod nvme_fabrics 00:38:48.643 rmmod nvme_keyring 00:38:48.643 22:37:43 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:48.643 22:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:38:48.643 22:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:38:48.643 22:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@515 -- # '[' -n 347317 ']' 00:38:48.643 22:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # killprocess 347317 00:38:48.643 22:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 347317 ']' 00:38:48.643 22:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 347317 00:38:48.643 22:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:38:48.643 22:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:48.643 22:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 347317 00:38:48.643 22:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:48.643 22:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:48.643 22:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 347317' 00:38:48.643 killing process with pid 347317 00:38:48.643 22:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 347317 00:38:48.643 22:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 347317 00:38:48.909 22:37:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:48.909 22:37:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:48.909 22:37:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:48.909 22:37:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:38:48.909 22:37:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-save 00:38:48.909 22:37:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-restore 00:38:48.909 22:37:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:48.909 22:37:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:48.909 22:37:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:48.909 22:37:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:48.909 22:37:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:48.909 22:37:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:50.935 22:37:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:50.935 00:38:50.935 real 0m23.190s 00:38:50.935 user 0m27.228s 00:38:50.935 sys 0m7.027s 00:38:50.935 22:37:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
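Teardown is symmetric with setup, and the iptables handling is worth noting: because the setup rule carried an SPDK_NVMF comment, nvmf_tcp_fini can strip the test's rules without disturbing anything else on the box. The flush of cvl_0_1 is traced above; the namespace deletion happens inside _remove_spdk_ns, whose body runs with xtrace off, so that line is an assumption:

iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the tagged test rules
ip netns delete cvl_0_0_ns_spdk                        # assumed: performed by _remove_spdk_ns
ip -4 addr flush cvl_0_1                               # nvmf/common.sh@303, as traced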
common/autotest_common.sh@1126 -- # xtrace_disable 00:38:50.935 22:37:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:50.935 ************************************ 00:38:50.935 END TEST nvmf_discovery_remove_ifc 00:38:50.935 ************************************ 00:38:50.935 22:37:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:38:50.935 22:37:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:38:50.935 22:37:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:50.935 22:37:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:50.935 ************************************ 00:38:50.935 START TEST nvmf_identify_kernel_target 00:38:50.935 ************************************ 00:38:50.935 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:38:51.196 * Looking for test storage... 00:38:51.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:38:51.196 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:51.196 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lcov --version 00:38:51.196 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:51.196 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:51.196 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:51.196 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:51.196 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:51.196 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:38:51.196 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:38:51.196 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:38:51.196 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:38:51.196 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:38:51.196 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:38:51.196 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:38:51.196 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:51.196 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:38:51.196 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:38:51.196 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:51.196 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:51.196 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:38:51.196 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:38:51.196 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:51.196 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:38:51.196 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:38:51.196 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:38:51.196 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:38:51.196 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:51.196 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:38:51.196 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:38:51.196 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:51.196 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:51.196 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:38:51.196 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:51.196 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:51.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:51.196 --rc genhtml_branch_coverage=1 00:38:51.196 --rc genhtml_function_coverage=1 00:38:51.196 --rc genhtml_legend=1 00:38:51.196 --rc geninfo_all_blocks=1 00:38:51.197 --rc geninfo_unexecuted_blocks=1 00:38:51.197 00:38:51.197 ' 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:51.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:51.197 --rc genhtml_branch_coverage=1 00:38:51.197 --rc genhtml_function_coverage=1 00:38:51.197 --rc genhtml_legend=1 00:38:51.197 --rc geninfo_all_blocks=1 00:38:51.197 --rc geninfo_unexecuted_blocks=1 00:38:51.197 00:38:51.197 ' 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:51.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:51.197 --rc genhtml_branch_coverage=1 00:38:51.197 --rc genhtml_function_coverage=1 00:38:51.197 --rc genhtml_legend=1 00:38:51.197 --rc geninfo_all_blocks=1 00:38:51.197 --rc geninfo_unexecuted_blocks=1 00:38:51.197 00:38:51.197 ' 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:51.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:51.197 --rc genhtml_branch_coverage=1 00:38:51.197 --rc genhtml_function_coverage=1 00:38:51.197 --rc genhtml_legend=1 00:38:51.197 --rc geninfo_all_blocks=1 00:38:51.197 --rc geninfo_unexecuted_blocks=1 00:38:51.197 00:38:51.197 ' 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
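
The lt/cmp_versions records above (deciding whether the installed lcov 1.15 predates 2.x before exporting LCOV_OPTS) split each version string on '.', '-' and ':' and compare component by component, padding the shorter side with zeros. A compact sketch of just the '<' path; the real scripts/common.sh helper also handles '>', '==', '<=' and '>=':

  lt() { cmp_versions "$1" '<' "$2"; }
  cmp_versions() {
    local IFS=.-:                  # split on dots, dashes and colons, as traced
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1                       # equal versions are not '<'
  }
  lt 1.15 2 && echo 'lcov predates 2.x'    # true for the 1.15 seen above
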
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:38:51.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:38:51.197 22:37:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:38:59.341 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:59.341 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:38:59.341 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:59.341 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:59.341 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:59.341 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:59.341 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:59.341 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:38:59.341 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:59.341 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:38:59.341 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:38:59.341 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:38:59.341 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:38:59.341 22:37:53 
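
The '[: : integer expression expected' message captured just above is harmless to this run but records a genuine bash pitfall at nvmf/common.sh line 33: an empty variable reached test's numeric -eq operator. A hedged illustration (flag is a stand-in name, not the variable actually used in common.sh):

  flag=''
  [ "$flag" -eq 1 ]        # prints '[: : integer expression expected', status 2
  [ "${flag:-0}" -eq 1 ]   # defensive: substitute a numeric default first
  [[ $flag -eq 1 ]]        # bash [[ ]] treats the empty string as 0, no error
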
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:38:59.341 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:38:59.341 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:59.341 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:59.341 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:59.341 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:59.341 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:59.341 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:59.341 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:59.341 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:59.341 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:59.341 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:59.341 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:59.341 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:59.341 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:59.341 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:59.341 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:59.341 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:59.341 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:59.341 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:59.341 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:59.341 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:38:59.341 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:38:59.341 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:59.341 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:59.341 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:38:59.342 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:38:59.342 Found net devices under 0000:4b:00.0: cvl_0_0 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:59.342 Found net devices under 0000:4b:00.1: cvl_0_1 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 
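
The scan above needs no vendor tooling to pair PCI functions with interface names: for each supported device it simply globs the net/ directory that sysfs exposes under the PCI address. Reduced to its core (pci_devs is the E810 list assembled earlier; the real helper also checks that each device's operstate is up):

  net_devs=()
  for pci in "${pci_devs[@]}"; do                      # e.g. 0000:4b:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdev dirs for this port
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the ifnames
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")                   # cvl_0_0, cvl_0_1 on this node
  done
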
-- # net_devs+=("${pci_net_devs[@]}") 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # is_hw=yes 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:59.342 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:59.342 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.783 ms 00:38:59.342 00:38:59.342 --- 10.0.0.2 ping statistics --- 00:38:59.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:59.342 rtt min/avg/max/mdev = 0.783/0.783/0.783/0.000 ms 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:59.342 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:59.342 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:38:59.342 00:38:59.342 --- 10.0.0.1 ping statistics --- 00:38:59.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:59.342 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # return 0 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@767 -- # local ip 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates=() 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # local -A ip_candidates 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:38:59.342 22:37:53 
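
nvmf_tcp_init, traced above, builds the full initiator/target topology on a single machine: one ice port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, the other (cvl_0_0) moves into a private network namespace and serves as the target at 10.0.0.2, and a single-packet ping in each direction proves the path before any NVMe traffic flows. Condensed from the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port leaves the root ns
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Admit NVMe/TCP; the SPDK_NVMF comment is what the teardown greps away later.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF
  ping -c 1 10.0.0.2                                      # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> initiator
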
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:38:59.342 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:38:59.343 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:59.343 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:59.343 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:38:59.343 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # local block nvme 00:38:59.343 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:38:59.343 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # modprobe nvmet 00:38:59.343 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:38:59.343 22:37:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:01.898 Waiting for block devices as requested 00:39:01.898 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:01.898 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:01.898 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:02.159 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:02.159 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:02.159 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:02.419 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:02.419 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:02.419 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:39:02.680 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:02.680 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:02.940 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:02.940 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:02.940 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:02.940 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:03.200 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:03.200 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:03.460 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:39:03.460 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:39:03.460 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:39:03.460 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:39:03.460 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:39:03.460 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
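
configure_kernel_target, entered above, exports the local drive as an in-kernel NVMe-oF/TCP target through configfs; the records that follow first confirm /dev/nvme0n1 is unused (spdk-gpt.py finds no GPT and blkid reports no partition table), then create the export. xtrace does not show redirection targets, so the attribute paths below are the standard nvmet configfs files matching the echoed values (attr_model is confirmed by the Model Number reported in the identify output further down):

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  modprobe nvmet
  mkdir "$subsys"
  mkdir "$subsys/namespaces/1"
  mkdir "$nvmet/ports/1"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
  echo 1            > "$subsys/attr_allow_any_host"
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
  echo tcp          > "$nvmet/ports/1/addr_trtype"
  echo 4420         > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4         > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"   # linking into the port goes live
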
00:39:03.460 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:39:03.460 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:39:03.460 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:39:03.460 No valid GPT data, bailing 00:39:03.460 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:39:03.460 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:39:03.460 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:39:03.460 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:39:03.460 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:39:03.460 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:03.460 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:03.460 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:39:03.460 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:39:03.460 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:39:03.460 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:39:03.460 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:39:03.460 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:39:03.460 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo tcp 00:39:03.460 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 4420 00:39:03.460 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo ipv4 00:39:03.460 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:39:03.460 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -a 10.0.0.1 -t tcp -s 4420 00:39:03.721 00:39:03.721 Discovery Log Number of Records 2, Generation counter 2 00:39:03.721 =====Discovery Log Entry 0====== 00:39:03.721 trtype: tcp 00:39:03.721 adrfam: ipv4 00:39:03.721 subtype: current discovery subsystem 00:39:03.721 treq: not specified, sq flow control disable supported 00:39:03.721 portid: 1 00:39:03.721 trsvcid: 4420 00:39:03.721 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:39:03.721 traddr: 10.0.0.1 00:39:03.721 eflags: none 00:39:03.721 sectype: none 00:39:03.721 =====Discovery Log Entry 1====== 00:39:03.721 trtype: tcp 00:39:03.721 adrfam: ipv4 00:39:03.721 subtype: nvme subsystem 00:39:03.721 treq: not specified, sq flow control disable 
supported 00:39:03.721 portid: 1 00:39:03.721 trsvcid: 4420 00:39:03.721 subnqn: nqn.2016-06.io.spdk:testnqn 00:39:03.722 traddr: 10.0.0.1 00:39:03.722 eflags: none 00:39:03.722 sectype: none 00:39:03.722 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:39:03.722 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:39:03.722 ===================================================== 00:39:03.722 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:39:03.722 ===================================================== 00:39:03.722 Controller Capabilities/Features 00:39:03.722 ================================ 00:39:03.722 Vendor ID: 0000 00:39:03.722 Subsystem Vendor ID: 0000 00:39:03.722 Serial Number: 035826e47cb9ab89600d 00:39:03.722 Model Number: Linux 00:39:03.722 Firmware Version: 6.8.9-20 00:39:03.722 Recommended Arb Burst: 0 00:39:03.722 IEEE OUI Identifier: 00 00 00 00:39:03.722 Multi-path I/O 00:39:03.722 May have multiple subsystem ports: No 00:39:03.722 May have multiple controllers: No 00:39:03.722 Associated with SR-IOV VF: No 00:39:03.722 Max Data Transfer Size: Unlimited 00:39:03.722 Max Number of Namespaces: 0 00:39:03.722 Max Number of I/O Queues: 1024 00:39:03.722 NVMe Specification Version (VS): 1.3 00:39:03.722 NVMe Specification Version (Identify): 1.3 00:39:03.722 Maximum Queue Entries: 1024 00:39:03.722 Contiguous Queues Required: No 00:39:03.722 Arbitration Mechanisms Supported 00:39:03.722 Weighted Round Robin: Not Supported 00:39:03.722 Vendor Specific: Not Supported 00:39:03.722 Reset Timeout: 7500 ms 00:39:03.722 Doorbell Stride: 4 bytes 00:39:03.722 NVM Subsystem Reset: Not Supported 00:39:03.722 Command Sets Supported 00:39:03.722 NVM Command Set: Supported 00:39:03.722 Boot Partition: Not Supported 00:39:03.722 Memory Page Size Minimum: 4096 bytes 00:39:03.722 Memory Page Size Maximum: 4096 bytes 00:39:03.722 Persistent Memory Region: Not Supported 00:39:03.722 Optional Asynchronous Events Supported 00:39:03.722 Namespace Attribute Notices: Not Supported 00:39:03.722 Firmware Activation Notices: Not Supported 00:39:03.722 ANA Change Notices: Not Supported 00:39:03.722 PLE Aggregate Log Change Notices: Not Supported 00:39:03.722 LBA Status Info Alert Notices: Not Supported 00:39:03.722 EGE Aggregate Log Change Notices: Not Supported 00:39:03.722 Normal NVM Subsystem Shutdown event: Not Supported 00:39:03.722 Zone Descriptor Change Notices: Not Supported 00:39:03.722 Discovery Log Change Notices: Supported 00:39:03.722 Controller Attributes 00:39:03.722 128-bit Host Identifier: Not Supported 00:39:03.722 Non-Operational Permissive Mode: Not Supported 00:39:03.722 NVM Sets: Not Supported 00:39:03.722 Read Recovery Levels: Not Supported 00:39:03.722 Endurance Groups: Not Supported 00:39:03.722 Predictable Latency Mode: Not Supported 00:39:03.722 Traffic Based Keep ALive: Not Supported 00:39:03.722 Namespace Granularity: Not Supported 00:39:03.722 SQ Associations: Not Supported 00:39:03.722 UUID List: Not Supported 00:39:03.722 Multi-Domain Subsystem: Not Supported 00:39:03.722 Fixed Capacity Management: Not Supported 00:39:03.722 Variable Capacity Management: Not Supported 00:39:03.722 Delete Endurance Group: Not Supported 00:39:03.722 Delete NVM Set: Not Supported 00:39:03.722 Extended LBA Formats Supported: Not Supported 00:39:03.722 Flexible Data Placement 
Supported: Not Supported 00:39:03.722 00:39:03.722 Controller Memory Buffer Support 00:39:03.722 ================================ 00:39:03.722 Supported: No 00:39:03.722 00:39:03.722 Persistent Memory Region Support 00:39:03.722 ================================ 00:39:03.722 Supported: No 00:39:03.722 00:39:03.722 Admin Command Set Attributes 00:39:03.722 ============================ 00:39:03.722 Security Send/Receive: Not Supported 00:39:03.722 Format NVM: Not Supported 00:39:03.722 Firmware Activate/Download: Not Supported 00:39:03.722 Namespace Management: Not Supported 00:39:03.722 Device Self-Test: Not Supported 00:39:03.722 Directives: Not Supported 00:39:03.722 NVMe-MI: Not Supported 00:39:03.722 Virtualization Management: Not Supported 00:39:03.722 Doorbell Buffer Config: Not Supported 00:39:03.722 Get LBA Status Capability: Not Supported 00:39:03.722 Command & Feature Lockdown Capability: Not Supported 00:39:03.722 Abort Command Limit: 1 00:39:03.722 Async Event Request Limit: 1 00:39:03.722 Number of Firmware Slots: N/A 00:39:03.722 Firmware Slot 1 Read-Only: N/A 00:39:03.722 Firmware Activation Without Reset: N/A 00:39:03.722 Multiple Update Detection Support: N/A 00:39:03.722 Firmware Update Granularity: No Information Provided 00:39:03.722 Per-Namespace SMART Log: No 00:39:03.722 Asymmetric Namespace Access Log Page: Not Supported 00:39:03.722 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:39:03.722 Command Effects Log Page: Not Supported 00:39:03.722 Get Log Page Extended Data: Supported 00:39:03.722 Telemetry Log Pages: Not Supported 00:39:03.722 Persistent Event Log Pages: Not Supported 00:39:03.722 Supported Log Pages Log Page: May Support 00:39:03.722 Commands Supported & Effects Log Page: Not Supported 00:39:03.722 Feature Identifiers & Effects Log Page:May Support 00:39:03.722 NVMe-MI Commands & Effects Log Page: May Support 00:39:03.722 Data Area 4 for Telemetry Log: Not Supported 00:39:03.722 Error Log Page Entries Supported: 1 00:39:03.722 Keep Alive: Not Supported 00:39:03.722 00:39:03.722 NVM Command Set Attributes 00:39:03.722 ========================== 00:39:03.722 Submission Queue Entry Size 00:39:03.722 Max: 1 00:39:03.722 Min: 1 00:39:03.722 Completion Queue Entry Size 00:39:03.722 Max: 1 00:39:03.722 Min: 1 00:39:03.722 Number of Namespaces: 0 00:39:03.722 Compare Command: Not Supported 00:39:03.722 Write Uncorrectable Command: Not Supported 00:39:03.722 Dataset Management Command: Not Supported 00:39:03.722 Write Zeroes Command: Not Supported 00:39:03.722 Set Features Save Field: Not Supported 00:39:03.722 Reservations: Not Supported 00:39:03.722 Timestamp: Not Supported 00:39:03.722 Copy: Not Supported 00:39:03.722 Volatile Write Cache: Not Present 00:39:03.722 Atomic Write Unit (Normal): 1 00:39:03.722 Atomic Write Unit (PFail): 1 00:39:03.722 Atomic Compare & Write Unit: 1 00:39:03.722 Fused Compare & Write: Not Supported 00:39:03.722 Scatter-Gather List 00:39:03.722 SGL Command Set: Supported 00:39:03.722 SGL Keyed: Not Supported 00:39:03.722 SGL Bit Bucket Descriptor: Not Supported 00:39:03.722 SGL Metadata Pointer: Not Supported 00:39:03.722 Oversized SGL: Not Supported 00:39:03.722 SGL Metadata Address: Not Supported 00:39:03.722 SGL Offset: Supported 00:39:03.722 Transport SGL Data Block: Not Supported 00:39:03.722 Replay Protected Memory Block: Not Supported 00:39:03.722 00:39:03.722 Firmware Slot Information 00:39:03.722 ========================= 00:39:03.722 Active slot: 0 00:39:03.722 00:39:03.722 00:39:03.722 Error Log 00:39:03.722 
========= 00:39:03.722 00:39:03.722 Active Namespaces 00:39:03.722 ================= 00:39:03.722 Discovery Log Page 00:39:03.722 ================== 00:39:03.722 Generation Counter: 2 00:39:03.722 Number of Records: 2 00:39:03.722 Record Format: 0 00:39:03.722 00:39:03.722 Discovery Log Entry 0 00:39:03.722 ---------------------- 00:39:03.722 Transport Type: 3 (TCP) 00:39:03.722 Address Family: 1 (IPv4) 00:39:03.722 Subsystem Type: 3 (Current Discovery Subsystem) 00:39:03.722 Entry Flags: 00:39:03.722 Duplicate Returned Information: 0 00:39:03.722 Explicit Persistent Connection Support for Discovery: 0 00:39:03.722 Transport Requirements: 00:39:03.722 Secure Channel: Not Specified 00:39:03.722 Port ID: 1 (0x0001) 00:39:03.722 Controller ID: 65535 (0xffff) 00:39:03.722 Admin Max SQ Size: 32 00:39:03.722 Transport Service Identifier: 4420 00:39:03.722 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:39:03.722 Transport Address: 10.0.0.1 00:39:03.722 Discovery Log Entry 1 00:39:03.722 ---------------------- 00:39:03.722 Transport Type: 3 (TCP) 00:39:03.722 Address Family: 1 (IPv4) 00:39:03.722 Subsystem Type: 2 (NVM Subsystem) 00:39:03.722 Entry Flags: 00:39:03.722 Duplicate Returned Information: 0 00:39:03.722 Explicit Persistent Connection Support for Discovery: 0 00:39:03.722 Transport Requirements: 00:39:03.722 Secure Channel: Not Specified 00:39:03.723 Port ID: 1 (0x0001) 00:39:03.723 Controller ID: 65535 (0xffff) 00:39:03.723 Admin Max SQ Size: 32 00:39:03.723 Transport Service Identifier: 4420 00:39:03.723 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:39:03.723 Transport Address: 10.0.0.1 00:39:03.723 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:03.723 get_feature(0x01) failed 00:39:03.723 get_feature(0x02) failed 00:39:03.723 get_feature(0x04) failed 00:39:03.723 ===================================================== 00:39:03.723 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:03.723 ===================================================== 00:39:03.723 Controller Capabilities/Features 00:39:03.723 ================================ 00:39:03.723 Vendor ID: 0000 00:39:03.723 Subsystem Vendor ID: 0000 00:39:03.723 Serial Number: 87c456d2e481bae86849 00:39:03.723 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:39:03.723 Firmware Version: 6.8.9-20 00:39:03.723 Recommended Arb Burst: 6 00:39:03.723 IEEE OUI Identifier: 00 00 00 00:39:03.723 Multi-path I/O 00:39:03.723 May have multiple subsystem ports: Yes 00:39:03.723 May have multiple controllers: Yes 00:39:03.723 Associated with SR-IOV VF: No 00:39:03.723 Max Data Transfer Size: Unlimited 00:39:03.723 Max Number of Namespaces: 1024 00:39:03.723 Max Number of I/O Queues: 128 00:39:03.723 NVMe Specification Version (VS): 1.3 00:39:03.723 NVMe Specification Version (Identify): 1.3 00:39:03.723 Maximum Queue Entries: 1024 00:39:03.723 Contiguous Queues Required: No 00:39:03.723 Arbitration Mechanisms Supported 00:39:03.723 Weighted Round Robin: Not Supported 00:39:03.723 Vendor Specific: Not Supported 00:39:03.723 Reset Timeout: 7500 ms 00:39:03.723 Doorbell Stride: 4 bytes 00:39:03.723 NVM Subsystem Reset: Not Supported 00:39:03.723 Command Sets Supported 00:39:03.723 NVM Command Set: Supported 00:39:03.723 Boot Partition: Not Supported 00:39:03.723 
Memory Page Size Minimum: 4096 bytes 00:39:03.723 Memory Page Size Maximum: 4096 bytes 00:39:03.723 Persistent Memory Region: Not Supported 00:39:03.723 Optional Asynchronous Events Supported 00:39:03.723 Namespace Attribute Notices: Supported 00:39:03.723 Firmware Activation Notices: Not Supported 00:39:03.723 ANA Change Notices: Supported 00:39:03.723 PLE Aggregate Log Change Notices: Not Supported 00:39:03.723 LBA Status Info Alert Notices: Not Supported 00:39:03.723 EGE Aggregate Log Change Notices: Not Supported 00:39:03.723 Normal NVM Subsystem Shutdown event: Not Supported 00:39:03.723 Zone Descriptor Change Notices: Not Supported 00:39:03.723 Discovery Log Change Notices: Not Supported 00:39:03.723 Controller Attributes 00:39:03.723 128-bit Host Identifier: Supported 00:39:03.723 Non-Operational Permissive Mode: Not Supported 00:39:03.723 NVM Sets: Not Supported 00:39:03.723 Read Recovery Levels: Not Supported 00:39:03.723 Endurance Groups: Not Supported 00:39:03.723 Predictable Latency Mode: Not Supported 00:39:03.723 Traffic Based Keep ALive: Supported 00:39:03.723 Namespace Granularity: Not Supported 00:39:03.723 SQ Associations: Not Supported 00:39:03.723 UUID List: Not Supported 00:39:03.723 Multi-Domain Subsystem: Not Supported 00:39:03.723 Fixed Capacity Management: Not Supported 00:39:03.723 Variable Capacity Management: Not Supported 00:39:03.723 Delete Endurance Group: Not Supported 00:39:03.723 Delete NVM Set: Not Supported 00:39:03.723 Extended LBA Formats Supported: Not Supported 00:39:03.723 Flexible Data Placement Supported: Not Supported 00:39:03.723 00:39:03.723 Controller Memory Buffer Support 00:39:03.723 ================================ 00:39:03.723 Supported: No 00:39:03.723 00:39:03.723 Persistent Memory Region Support 00:39:03.723 ================================ 00:39:03.723 Supported: No 00:39:03.723 00:39:03.723 Admin Command Set Attributes 00:39:03.723 ============================ 00:39:03.723 Security Send/Receive: Not Supported 00:39:03.723 Format NVM: Not Supported 00:39:03.723 Firmware Activate/Download: Not Supported 00:39:03.723 Namespace Management: Not Supported 00:39:03.723 Device Self-Test: Not Supported 00:39:03.723 Directives: Not Supported 00:39:03.723 NVMe-MI: Not Supported 00:39:03.723 Virtualization Management: Not Supported 00:39:03.723 Doorbell Buffer Config: Not Supported 00:39:03.723 Get LBA Status Capability: Not Supported 00:39:03.723 Command & Feature Lockdown Capability: Not Supported 00:39:03.723 Abort Command Limit: 4 00:39:03.723 Async Event Request Limit: 4 00:39:03.723 Number of Firmware Slots: N/A 00:39:03.723 Firmware Slot 1 Read-Only: N/A 00:39:03.723 Firmware Activation Without Reset: N/A 00:39:03.723 Multiple Update Detection Support: N/A 00:39:03.723 Firmware Update Granularity: No Information Provided 00:39:03.723 Per-Namespace SMART Log: Yes 00:39:03.723 Asymmetric Namespace Access Log Page: Supported 00:39:03.723 ANA Transition Time : 10 sec 00:39:03.723 00:39:03.723 Asymmetric Namespace Access Capabilities 00:39:03.723 ANA Optimized State : Supported 00:39:03.723 ANA Non-Optimized State : Supported 00:39:03.723 ANA Inaccessible State : Supported 00:39:03.723 ANA Persistent Loss State : Supported 00:39:03.723 ANA Change State : Supported 00:39:03.723 ANAGRPID is not changed : No 00:39:03.723 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:39:03.723 00:39:03.723 ANA Group Identifier Maximum : 128 00:39:03.723 Number of ANA Group Identifiers : 128 00:39:03.723 Max Number of Allowed Namespaces : 1024 00:39:03.723 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:39:03.723 Command Effects Log Page: Supported 00:39:03.723 Get Log Page Extended Data: Supported 00:39:03.723 Telemetry Log Pages: Not Supported 00:39:03.723 Persistent Event Log Pages: Not Supported 00:39:03.723 Supported Log Pages Log Page: May Support 00:39:03.723 Commands Supported & Effects Log Page: Not Supported 00:39:03.723 Feature Identifiers & Effects Log Page:May Support 00:39:03.723 NVMe-MI Commands & Effects Log Page: May Support 00:39:03.723 Data Area 4 for Telemetry Log: Not Supported 00:39:03.723 Error Log Page Entries Supported: 128 00:39:03.723 Keep Alive: Supported 00:39:03.723 Keep Alive Granularity: 1000 ms 00:39:03.723 00:39:03.723 NVM Command Set Attributes 00:39:03.723 ========================== 00:39:03.723 Submission Queue Entry Size 00:39:03.723 Max: 64 00:39:03.723 Min: 64 00:39:03.723 Completion Queue Entry Size 00:39:03.723 Max: 16 00:39:03.723 Min: 16 00:39:03.723 Number of Namespaces: 1024 00:39:03.723 Compare Command: Not Supported 00:39:03.723 Write Uncorrectable Command: Not Supported 00:39:03.723 Dataset Management Command: Supported 00:39:03.723 Write Zeroes Command: Supported 00:39:03.723 Set Features Save Field: Not Supported 00:39:03.723 Reservations: Not Supported 00:39:03.723 Timestamp: Not Supported 00:39:03.723 Copy: Not Supported 00:39:03.723 Volatile Write Cache: Present 00:39:03.723 Atomic Write Unit (Normal): 1 00:39:03.723 Atomic Write Unit (PFail): 1 00:39:03.723 Atomic Compare & Write Unit: 1 00:39:03.723 Fused Compare & Write: Not Supported 00:39:03.723 Scatter-Gather List 00:39:03.723 SGL Command Set: Supported 00:39:03.723 SGL Keyed: Not Supported 00:39:03.723 SGL Bit Bucket Descriptor: Not Supported 00:39:03.723 SGL Metadata Pointer: Not Supported 00:39:03.723 Oversized SGL: Not Supported 00:39:03.723 SGL Metadata Address: Not Supported 00:39:03.723 SGL Offset: Supported 00:39:03.723 Transport SGL Data Block: Not Supported 00:39:03.723 Replay Protected Memory Block: Not Supported 00:39:03.723 00:39:03.723 Firmware Slot Information 00:39:03.723 ========================= 00:39:03.723 Active slot: 0 00:39:03.723 00:39:03.723 Asymmetric Namespace Access 00:39:03.723 =========================== 00:39:03.723 Change Count : 0 00:39:03.723 Number of ANA Group Descriptors : 1 00:39:03.723 ANA Group Descriptor : 0 00:39:03.723 ANA Group ID : 1 00:39:03.723 Number of NSID Values : 1 00:39:03.723 Change Count : 0 00:39:03.723 ANA State : 1 00:39:03.723 Namespace Identifier : 1 00:39:03.723 00:39:03.723 Commands Supported and Effects 00:39:03.723 ============================== 00:39:03.723 Admin Commands 00:39:03.723 -------------- 00:39:03.723 Get Log Page (02h): Supported 00:39:03.723 Identify (06h): Supported 00:39:03.723 Abort (08h): Supported 00:39:03.723 Set Features (09h): Supported 00:39:03.723 Get Features (0Ah): Supported 00:39:03.723 Asynchronous Event Request (0Ch): Supported 00:39:03.723 Keep Alive (18h): Supported 00:39:03.723 I/O Commands 00:39:03.723 ------------ 00:39:03.723 Flush (00h): Supported 00:39:03.723 Write (01h): Supported LBA-Change 00:39:03.723 Read (02h): Supported 00:39:03.723 Write Zeroes (08h): Supported LBA-Change 00:39:03.723 Dataset Management (09h): Supported 00:39:03.723 00:39:03.723 Error Log 00:39:03.723 ========= 00:39:03.723 Entry: 0 00:39:03.723 Error Count: 0x3 00:39:03.724 Submission Queue Id: 0x0 00:39:03.724 Command Id: 0x5 00:39:03.724 Phase Bit: 0 00:39:03.724 Status Code: 0x2 00:39:03.724 Status Code Type: 0x0 00:39:03.724 Do Not Retry: 1 00:39:03.724 
Error Location: 0x28 00:39:03.724 LBA: 0x0 00:39:03.724 Namespace: 0x0 00:39:03.724 Vendor Log Page: 0x0 00:39:03.724 ----------- 00:39:03.724 Entry: 1 00:39:03.724 Error Count: 0x2 00:39:03.724 Submission Queue Id: 0x0 00:39:03.724 Command Id: 0x5 00:39:03.724 Phase Bit: 0 00:39:03.724 Status Code: 0x2 00:39:03.724 Status Code Type: 0x0 00:39:03.724 Do Not Retry: 1 00:39:03.724 Error Location: 0x28 00:39:03.724 LBA: 0x0 00:39:03.724 Namespace: 0x0 00:39:03.724 Vendor Log Page: 0x0 00:39:03.724 ----------- 00:39:03.724 Entry: 2 00:39:03.724 Error Count: 0x1 00:39:03.724 Submission Queue Id: 0x0 00:39:03.724 Command Id: 0x4 00:39:03.724 Phase Bit: 0 00:39:03.724 Status Code: 0x2 00:39:03.724 Status Code Type: 0x0 00:39:03.724 Do Not Retry: 1 00:39:03.724 Error Location: 0x28 00:39:03.724 LBA: 0x0 00:39:03.724 Namespace: 0x0 00:39:03.724 Vendor Log Page: 0x0 00:39:03.724 00:39:03.724 Number of Queues 00:39:03.724 ================ 00:39:03.724 Number of I/O Submission Queues: 128 00:39:03.724 Number of I/O Completion Queues: 128 00:39:03.724 00:39:03.724 ZNS Specific Controller Data 00:39:03.724 ============================ 00:39:03.724 Zone Append Size Limit: 0 00:39:03.724 00:39:03.724 00:39:03.724 Active Namespaces 00:39:03.724 ================= 00:39:03.724 get_feature(0x05) failed 00:39:03.724 Namespace ID:1 00:39:03.724 Command Set Identifier: NVM (00h) 00:39:03.724 Deallocate: Supported 00:39:03.724 Deallocated/Unwritten Error: Not Supported 00:39:03.724 Deallocated Read Value: Unknown 00:39:03.724 Deallocate in Write Zeroes: Not Supported 00:39:03.724 Deallocated Guard Field: 0xFFFF 00:39:03.724 Flush: Supported 00:39:03.724 Reservation: Not Supported 00:39:03.724 Namespace Sharing Capabilities: Multiple Controllers 00:39:03.724 Size (in LBAs): 3750748848 (1788GiB) 00:39:03.724 Capacity (in LBAs): 3750748848 (1788GiB) 00:39:03.724 Utilization (in LBAs): 3750748848 (1788GiB) 00:39:03.724 UUID: 201e0650-95b9-4787-b545-2021731e60c8 00:39:03.724 Thin Provisioning: Not Supported 00:39:03.724 Per-NS Atomic Units: Yes 00:39:03.724 Atomic Write Unit (Normal): 8 00:39:03.724 Atomic Write Unit (PFail): 8 00:39:03.724 Preferred Write Granularity: 8 00:39:03.724 Atomic Compare & Write Unit: 8 00:39:03.724 Atomic Boundary Size (Normal): 0 00:39:03.724 Atomic Boundary Size (PFail): 0 00:39:03.724 Atomic Boundary Offset: 0 00:39:03.724 NGUID/EUI64 Never Reused: No 00:39:03.724 ANA group ID: 1 00:39:03.724 Namespace Write Protected: No 00:39:03.724 Number of LBA Formats: 1 00:39:03.724 Current LBA Format: LBA Format #00 00:39:03.724 LBA Format #00: Data Size: 512 Metadata Size: 0 00:39:03.724 00:39:03.724 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:39:03.724 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:39:03.724 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:39:03.724 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:03.724 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:39:03.724 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:03.724 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:03.724 rmmod nvme_tcp 00:39:03.724 rmmod nvme_fabrics 00:39:03.983 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:03.983 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:39:03.983 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:39:03.983 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:39:03.983 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:39:03.983 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:39:03.983 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:39:03.983 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:39:03.983 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-save 00:39:03.983 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:39:03.983 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-restore 00:39:03.983 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:03.983 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:03.983 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:03.983 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:03.983 22:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:05.893 22:38:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:05.893 22:38:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:39:05.893 22:38:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:39:05.893 22:38:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # echo 0 00:39:05.893 22:38:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:05.893 22:38:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:05.893 22:38:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:39:05.893 22:38:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:05.893 22:38:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:39:05.893 22:38:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:39:06.154 22:38:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:09.463 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:39:09.463 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:39:09.463 0000:80:01.4 
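[editor's note] The clean_kernel_target trace just above tears down the configfs-based kernel target in the reverse order of its creation; the port-to-subsystem symlink has to go before the namespace and subsystem directories can be removed. Condensed from the commands in the trace (the redirect target of the "echo 0" is hidden by xtrace and inferred to be the namespace enable flag):

subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
echo 0 > "$subsys/namespaces/1/enable"                    # disable the namespace first
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
rmdir "$subsys/namespaces/1"                              # then the namespace...
rmdir /sys/kernel/config/nvmet/ports/1                    # ...the port...
rmdir "$subsys"                                           # ...and the subsystem itself
modprobe -r nvmet_tcp nvmet                               # finally unload the modules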
(8086 0b00): ioatdma -> vfio-pci 00:39:09.463 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:39:09.463 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:39:09.463 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:39:09.463 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:39:09.463 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:39:09.463 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:39:09.463 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:39:09.724 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:39:09.724 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:39:09.724 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:39:09.724 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:39:09.724 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:39:09.724 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:39:09.724 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:39:09.985 00:39:09.985 real 0m18.997s 00:39:09.985 user 0m5.010s 00:39:09.985 sys 0m11.033s 00:39:09.985 22:38:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:09.985 22:38:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:39:09.985 ************************************ 00:39:09.985 END TEST nvmf_identify_kernel_target 00:39:09.985 ************************************ 00:39:09.985 22:38:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:39:09.985 22:38:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:39:09.985 22:38:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:09.985 22:38:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:39:09.985 ************************************ 00:39:09.985 START TEST nvmf_auth_host 00:39:09.985 ************************************ 00:39:09.985 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:39:10.245 * Looking for test storage... 
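[editor's note] The "ioatdma -> vfio-pci" and "nvme -> vfio-pci" lines above are setup.sh rebinding DMA engines and the NVMe drive to vfio-pci for userspace use. setup.sh's actual logic is more involved; a minimal sketch of the underlying sysfs mechanism behind one such line:

bdf=0000:65:00.0
echo "$bdf" > /sys/bus/pci/devices/$bdf/driver/unbind      # detach from the nvme driver
echo vfio-pci > /sys/bus/pci/devices/$bdf/driver_override  # pin the replacement driver
echo "$bdf" > /sys/bus/pci/drivers_probe                   # rebind, now via vfio-pci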
00:39:10.245 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:39:10.245 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:39:10.245 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lcov --version 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:39:10.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:10.246 --rc genhtml_branch_coverage=1 00:39:10.246 --rc genhtml_function_coverage=1 00:39:10.246 --rc genhtml_legend=1 00:39:10.246 --rc geninfo_all_blocks=1 00:39:10.246 --rc geninfo_unexecuted_blocks=1 00:39:10.246 00:39:10.246 ' 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:39:10.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:10.246 --rc genhtml_branch_coverage=1 00:39:10.246 --rc genhtml_function_coverage=1 00:39:10.246 --rc genhtml_legend=1 00:39:10.246 --rc geninfo_all_blocks=1 00:39:10.246 --rc geninfo_unexecuted_blocks=1 00:39:10.246 00:39:10.246 ' 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:39:10.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:10.246 --rc genhtml_branch_coverage=1 00:39:10.246 --rc genhtml_function_coverage=1 00:39:10.246 --rc genhtml_legend=1 00:39:10.246 --rc geninfo_all_blocks=1 00:39:10.246 --rc geninfo_unexecuted_blocks=1 00:39:10.246 00:39:10.246 ' 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:39:10.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:10.246 --rc genhtml_branch_coverage=1 00:39:10.246 --rc genhtml_function_coverage=1 00:39:10.246 --rc genhtml_legend=1 00:39:10.246 --rc geninfo_all_blocks=1 00:39:10.246 --rc geninfo_unexecuted_blocks=1 00:39:10.246 00:39:10.246 ' 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:10.246 22:38:05 
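[editor's note] The trace above walks scripts/common.sh's cmp_versions to decide whether the installed lcov predates 2.0 and therefore needs the explicit branch/function coverage flags shown in LCOV_OPTS. The same dot-segment comparison in a few lines; a sketch, not the harness implementation:

version_lt() {                     # returns 0 if $1 < $2, comparing dot segments
    local -a a b; IFS=. read -ra a <<< "$1"; IFS=. read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        ((${a[i]:-0} < ${b[i]:-0})) && return 0
        ((${a[i]:-0} > ${b[i]:-0})) && return 1
    done
    return 1                       # equal is not less-than
}
version_lt 1.15 2 && echo "old lcov: pass --rc lcov_branch_coverage=1 ..."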
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:10.246 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:39:10.246 22:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:18.382 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:18.382 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:39:18.382 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:18.382 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:18.382 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:18.382 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:18.382 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:18.382 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:39:18.382 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:18.382 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:39:18.382 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:39:18.382 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:39:18.382 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:39:18.382 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:39:18.382 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:39:18.382 22:38:12 
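[editor's note] auth.sh has just laid out its parameter space: three digests, five FFDHE groups, and paired keys[]/ckeys[] slots, where the ckey of a slot appears to serve as the controller-side secret for bidirectional DH-HMAC-CHAP. These arrays presumably feed a test matrix of this shape (a conceptual sketch, not auth.sh's actual loop):

digests=("sha256" "sha384" "sha512")
dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        echo "would attempt a DH-HMAC-CHAP connect with $digest / $dhgroup"
    done
done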
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:18.382 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:18.382 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:18.382 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:18.382 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:18.382 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:18.382 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:18.382 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:18.382 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:18.382 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:18.382 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:18.382 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:18.382 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:18.382 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:39:18.383 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:39:18.383 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:18.383 
22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:39:18.383 Found net devices under 0000:4b:00.0: cvl_0_0 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:39:18.383 Found net devices under 0000:4b:00.1: cvl_0_1 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # is_hw=yes 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:18.383 22:38:12 
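[editor's note] The "Found net devices under ..." messages above come from sysfs: each PCI network function lists its netdevs under its device node, which is how the harness maps the two detected E810 functions (0x8086/0x159b) to cvl_0_0 and cvl_0_1. A standalone sketch of that lookup:

for pci in 0000:4b:00.0 0000:4b:00.1; do
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        [[ -e $dev ]] && echo "Found net devices under $pci: ${dev##*/}"
    done
done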
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:18.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:18.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.684 ms 00:39:18.383 00:39:18.383 --- 10.0.0.2 ping statistics --- 00:39:18.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:18.383 rtt min/avg/max/mdev = 0.684/0.684/0.684/0.000 ms 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:18.383 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
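[editor's note] nvmf_tcp_init above isolates the target NIC in its own network namespace so the host and target stacks cannot short-circuit through the local loopback path, then punches a firewall hole for port 4420 and ping-checks both directions. Condensed from the trace (address flushes and the loopback bring-up omitted):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target NIC moves out
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # allow NVMe/TCP in
ping -c 1 10.0.0.2                                                 # sanity check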
00:39:18.383 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:39:18.383 00:39:18.383 --- 10.0.0.1 ping statistics --- 00:39:18.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:18.383 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # return 0 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # nvmfpid=361838 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # waitforlisten 361838 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 361838 ']' 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
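[editor's note] The nvmfappstart/waitforlisten pair above launches nvmf_tgt inside the target namespace and then blocks until its RPC socket answers. A simplified sketch of that pattern, with paths abbreviated relative to the SPDK checkout and the poll loop reduced to its essence:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.1          # wait for the app to open its RPC UNIX socket
done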
00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:18.383 22:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:18.646 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:18.646 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:39:18.646 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:39:18.646 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:18.646 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:18.646 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:18.646 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:39:18.646 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:39:18.646 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:39:18.646 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:39:18.646 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:39:18.646 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:39:18.646 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:39:18.646 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:39:18.646 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=d25d7b929635e6c4117625016d36ba10 00:39:18.646 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:39:18.646 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.RNz 00:39:18.646 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key d25d7b929635e6c4117625016d36ba10 0 00:39:18.646 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 d25d7b929635e6c4117625016d36ba10 0 00:39:18.646 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:39:18.646 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:39:18.646 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=d25d7b929635e6c4117625016d36ba10 00:39:18.646 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:39:18.646 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:39:18.646 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.RNz 00:39:18.646 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.RNz 00:39:18.646 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.RNz 00:39:18.646 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:39:18.646 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:39:18.646 22:38:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:39:18.646 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:39:18.646 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:39:18.646 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:39:18.646 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:39:18.646 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=24c50f8582e7f5ec7e76cbee395b68af5c7deb96db2026e1a718b3fbd46d585c 00:39:18.646 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:39:18.646 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.xEL 00:39:18.646 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 24c50f8582e7f5ec7e76cbee395b68af5c7deb96db2026e1a718b3fbd46d585c 3 00:39:18.646 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 24c50f8582e7f5ec7e76cbee395b68af5c7deb96db2026e1a718b3fbd46d585c 3 00:39:18.646 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:39:18.646 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:39:18.646 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=24c50f8582e7f5ec7e76cbee395b68af5c7deb96db2026e1a718b3fbd46d585c 00:39:18.646 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:39:18.646 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:39:18.908 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.xEL 00:39:18.908 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.xEL 00:39:18.908 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.xEL 00:39:18.908 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:39:18.908 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:39:18.908 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:39:18.908 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:39:18.908 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:39:18.908 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:39:18.908 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:39:18.908 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=22cc8305fbf002a752326cf4219786b8260ad8a7258fc28b 00:39:18.908 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:39:18.908 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.Gn1 00:39:18.908 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 22cc8305fbf002a752326cf4219786b8260ad8a7258fc28b 0 00:39:18.908 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 22cc8305fbf002a752326cf4219786b8260ad8a7258fc28b 0 
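[editor's note] gen_dhchap_key above draws N random bytes with xxd and hands the hex string to a short inline python snippet (elided by xtrace as "python -"), producing an NVMe DH-HMAC-CHAP secret. A sketch of the wrapping step, under the assumption, taken from the spec's DHHC-1 secret representation rather than from this trace, that the base64 payload is the key followed by its little-endian CRC-32:

key=$(xxd -p -c0 -l 32 /dev/urandom)   # 32 random bytes as hex, as in the trace
python3 - "$key" <<'EOF'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, "little")       # CRC-32 trailer (assumed LE)
print(f"DHHC-1:03:{base64.b64encode(key + crc).decode()}:")   # 03 = sha512 hash id
EOF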
00:39:18.908 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:39:18.908 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:39:18.908 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=22cc8305fbf002a752326cf4219786b8260ad8a7258fc28b 00:39:18.908 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:39:18.908 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:39:18.908 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.Gn1 00:39:18.908 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.Gn1 00:39:18.908 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Gn1 00:39:18.908 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:39:18.908 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:39:18.908 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:39:18.908 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:39:18.908 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:39:18.908 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:39:18.908 22:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:39:18.908 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=1a93bc8c52d9fbabfd30af72f36158894817ba244abaf6db 00:39:18.908 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:39:18.908 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.FXd 00:39:18.908 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 1a93bc8c52d9fbabfd30af72f36158894817ba244abaf6db 2 00:39:18.908 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 1a93bc8c52d9fbabfd30af72f36158894817ba244abaf6db 2 00:39:18.908 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:39:18.908 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:39:18.908 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=1a93bc8c52d9fbabfd30af72f36158894817ba244abaf6db 00:39:18.908 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:39:18.908 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:39:18.908 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.FXd 00:39:18.908 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.FXd 00:39:18.908 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.FXd 00:39:18.908 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:39:18.908 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:39:18.908 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:39:18.908 22:38:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:39:18.908 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:39:18.908 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:39:18.908 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:39:18.908 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=ac1328acd087098c1de0311e51b5878c 00:39:18.908 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:39:18.908 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.s1q 00:39:18.908 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key ac1328acd087098c1de0311e51b5878c 1 00:39:18.909 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 ac1328acd087098c1de0311e51b5878c 1 00:39:18.909 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:39:18.909 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:39:18.909 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=ac1328acd087098c1de0311e51b5878c 00:39:18.909 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:39:18.909 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:39:18.909 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.s1q 00:39:18.909 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.s1q 00:39:18.909 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.s1q 00:39:18.909 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:39:18.909 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:39:18.909 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:39:18.909 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:39:18.909 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:39:18.909 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:39:18.909 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:39:18.909 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=902d151cef8c2d28be51c9f53c0aa7fa 00:39:18.909 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:39:18.909 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.9kD 00:39:18.909 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 902d151cef8c2d28be51c9f53c0aa7fa 1 00:39:18.909 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 902d151cef8c2d28be51c9f53c0aa7fa 1 00:39:18.909 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:39:18.909 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:39:18.909 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # 
key=902d151cef8c2d28be51c9f53c0aa7fa 00:39:18.909 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:39:18.909 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.9kD 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.9kD 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.9kD 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=3a70b86cc2b50f202502dfff34c3099fd941c38148f9954f 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.xv9 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 3a70b86cc2b50f202502dfff34c3099fd941c38148f9954f 2 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 3a70b86cc2b50f202502dfff34c3099fd941c38148f9954f 2 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=3a70b86cc2b50f202502dfff34c3099fd941c38148f9954f 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.xv9 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.xv9 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.xv9 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:39:19.170 22:38:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=7165c356273675d85de17f5a3527f85f 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.C2c 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 7165c356273675d85de17f5a3527f85f 0 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 7165c356273675d85de17f5a3527f85f 0 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=7165c356273675d85de17f5a3527f85f 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.C2c 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.C2c 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.C2c 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=328707b810a0702ac4bdab09f9bc6da7a4d9d590b63598bea2fd5dce92d5620f 00:39:19.170 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:39:19.171 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.rzD 00:39:19.171 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 328707b810a0702ac4bdab09f9bc6da7a4d9d590b63598bea2fd5dce92d5620f 3 00:39:19.171 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 328707b810a0702ac4bdab09f9bc6da7a4d9d590b63598bea2fd5dce92d5620f 3 00:39:19.171 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:39:19.171 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:39:19.171 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=328707b810a0702ac4bdab09f9bc6da7a4d9d590b63598bea2fd5dce92d5620f 00:39:19.171 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:39:19.171 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@731 -- # python - 00:39:19.171 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.rzD 00:39:19.171 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.rzD 00:39:19.171 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.rzD 00:39:19.171 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:39:19.171 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 361838 00:39:19.171 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 361838 ']' 00:39:19.171 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:19.171 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:19.171 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:19.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:19.171 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:19.171 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.RNz 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.xEL ]] 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xEL 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Gn1 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.FXd ]] 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.FXd 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.s1q 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.9kD ]] 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.9kD 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.xv9 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.C2c ]] 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.C2c 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.rzD 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:19.432 22:38:14 
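
The secrets generated and registered above use the DHHC-1 representation for NVMe DH-HMAC-CHAP: the ASCII hex string is suffixed with its little-endian CRC32, base64-encoded, and prefixed with a two-digit digest id (00 = null, 01 = sha256, 02 = sha384, 03 = sha512, matching the digests table at nvmf/common.sh@750). xtrace does not show the heredoc fed to "python -" at nvmf/common.sh@731, so the following is a reconstruction of that formatting step, not the verbatim script:

    # Reconstruction of format_key: wrap an ASCII hex key in a DHHC-1 secret.
    key=7165c356273675d85de17f5a3527f85f
    digest=0
    python3 -c '
    import base64, sys, zlib
    key = sys.argv[1].encode()
    crc = zlib.crc32(key).to_bytes(4, "little")   # integrity suffix
    print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
    ' "$key" "$digest"
    # Should print the secret registered as ckey3 above:
    # DHHC-1:00:NzE2NWMzNTYyNzM2NzVkODVkZTE3ZjVhMzUyN2Y4NWaSDkhX:

The layout can be checked against the trace itself: stripping the last four bytes of the base64 payload of ckey3 gives back exactly the 16-byte hex string drawn from /dev/urandom at nvmf/common.sh@753.
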
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # local block nvme 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:39:19.432 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # modprobe nvmet 00:39:19.693 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:39:19.693 22:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:22.999 Waiting for block devices as requested 00:39:22.999 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:22.999 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:22.999 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:22.999 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:23.260 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:23.260 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:23.260 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:23.520 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:23.520 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:39:23.782 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:23.782 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:23.782 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:23.782 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:24.042 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:24.042 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:24.042 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:24.302 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:25.687 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:39:25.687 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:39:25.687 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:39:25.687 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:39:25.687 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:39:25.687 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:39:25.687 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:39:25.687 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:39:25.687 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:39:25.687 No valid GPT data, bailing 00:39:25.687 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:39:25.687 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:39:25.687 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:39:25.687 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:39:25.687 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:39:25.687 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:39:25.687 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:39:25.687 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:39:25.687 22:38:20 
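
configure_kernel_target stands up an in-kernel nvmet target through configfs: the mkdir calls just traced create the subsystem, namespace, and port, and the echo/ln -s sequence that follows wires them up. xtrace hides the redirection targets of those echoes, so the attribute names below are inferred from the standard nvmet configfs layout; condensed as a sketch:

    # Kernel nvmet target bring-up, condensed (attribute names inferred).
    modprobe nvmet
    cd /sys/kernel/config/nvmet
    mkdir subsystems/nqn.2024-02.io.spdk:cnode0
    mkdir subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
    mkdir ports/1
    echo SPDK-nqn.2024-02.io.spdk:cnode0 > subsystems/nqn.2024-02.io.spdk:cnode0/attr_model
    echo 1            > subsystems/nqn.2024-02.io.spdk:cnode0/attr_allow_any_host
    echo /dev/nvme0n1 > subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/device_path
    echo 1            > subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable
    echo 10.0.0.1     > ports/1/addr_traddr
    echo tcp          > ports/1/addr_trtype
    echo 4420         > ports/1/addr_trsvcid
    echo ipv4         > ports/1/addr_adrfam
    ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ports/1/subsystems/

The nvme discover output further below confirms the result: a discovery subsystem plus nqn.2024-02.io.spdk:cnode0, both on tcp/ipv4, 10.0.0.1:4420.
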
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:39:25.687 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:39:25.687 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:39:25.687 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:39:25.687 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:39:25.687 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo tcp 00:39:25.687 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 4420 00:39:25.687 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo ipv4 00:39:25.687 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:39:25.687 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -a 10.0.0.1 -t tcp -s 4420 00:39:25.687 00:39:25.687 Discovery Log Number of Records 2, Generation counter 2 00:39:25.687 =====Discovery Log Entry 0====== 00:39:25.687 trtype: tcp 00:39:25.687 adrfam: ipv4 00:39:25.687 subtype: current discovery subsystem 00:39:25.687 treq: not specified, sq flow control disable supported 00:39:25.687 portid: 1 00:39:25.687 trsvcid: 4420 00:39:25.687 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:39:25.687 traddr: 10.0.0.1 00:39:25.687 eflags: none 00:39:25.687 sectype: none 00:39:25.687 =====Discovery Log Entry 1====== 00:39:25.687 trtype: tcp 00:39:25.687 adrfam: ipv4 00:39:25.687 subtype: nvme subsystem 00:39:25.687 treq: not specified, sq flow control disable supported 00:39:25.687 portid: 1 00:39:25.687 trsvcid: 4420 00:39:25.687 subnqn: nqn.2024-02.io.spdk:cnode0 00:39:25.687 traddr: 10.0.0.1 00:39:25.688 eflags: none 00:39:25.688 sectype: none 00:39:25.688 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:39:25.688 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:39:25.688 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:39:25.688 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:39:25.688 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:25.688 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:25.688 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:25.688 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:25.688 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJjYzgzMDVmYmYwMDJhNzUyMzI2Y2Y0MjE5Nzg2YjgyNjBhZDhhNzI1OGZjMjhiJ3xOtg==: 00:39:25.688 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: 00:39:25.688 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:25.688 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:39:25.950 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJjYzgzMDVmYmYwMDJhNzUyMzI2Y2Y0MjE5Nzg2YjgyNjBhZDhhNzI1OGZjMjhiJ3xOtg==: 00:39:25.950 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: ]] 00:39:25.950 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: 00:39:25.950 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:39:25.950 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:39:25.950 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:39:25.950 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:39:25.950 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:39:25.950 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:25.950 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:39:25.950 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:39:25.950 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:25.950 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:25.950 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:39:25.950 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:25.950 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:25.950 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:25.950 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:25.950 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:25.950 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:25.950 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:25.950 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:25.950 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:25.950 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:25.950 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:25.950 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:25.950 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:25.950 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:25.950 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:25.950 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:25.950 22:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:25.950 nvme0n1 00:39:25.950 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:25.950 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:25.950 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:25.950 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:25.950 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:25.950 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:25.950 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:25.950 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:25.950 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:25.950 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:25.950 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:25.950 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:39:25.950 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:25.950 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:25.950 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:39:25.950 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:25.950 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:25.950 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:25.950 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:25.950 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDI1ZDdiOTI5NjM1ZTZjNDExNzYyNTAxNmQzNmJhMTCNtXb4: 00:39:25.950 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjRjNTBmODU4MmU3ZjVlYzdlNzZjYmVlMzk1YjY4YWY1YzdkZWI5NmRiMjAyNmUxYTcxOGIzZmJkNDZkNTg1Y8k+rDI=: 00:39:25.950 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:25.950 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:25.950 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDI1ZDdiOTI5NjM1ZTZjNDExNzYyNTAxNmQzNmJhMTCNtXb4: 00:39:25.950 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjRjNTBmODU4MmU3ZjVlYzdlNzZjYmVlMzk1YjY4YWY1YzdkZWI5NmRiMjAyNmUxYTcxOGIzZmJkNDZkNTg1Y8k+rDI=: ]] 00:39:25.950 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjRjNTBmODU4MmU3ZjVlYzdlNzZjYmVlMzk1YjY4YWY1YzdkZWI5NmRiMjAyNmUxYTcxOGIzZmJkNDZkNTg1Y8k+rDI=: 00:39:25.950 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
00:39:25.950 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:25.950 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:25.950 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:25.950 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:25.951 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:25.951 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:39:25.951 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:25.951 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:26.212 nvme0n1 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:26.212 22:38:21 
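
Each iteration of the test pairs a target-side nvmet_auth_set_key with an initiator-side connect_authenticate. On the target, the digest, DH group, and secrets land in the host entry's configfs attributes (the dhchap_* names are inferred; xtrace shows only the echoed values); on the initiator, SPDK is driven over its RPC socket with scripts/rpc.py, which rpc_cmd wraps in these scripts. The keyid=0 iteration just traced, condensed:

    # Target side: select digest/dhgroup and install the secrets for host0.
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"
    echo ffdhe2048      > "$host/dhchap_dhgroup"
    echo "DHHC-1:00:ZDI1ZDdiOTI5NjM1ZTZjNDExNzYyNTAxNmQzNmJhMTCNtXb4:" > "$host/dhchap_key"
    echo "DHHC-1:03:MjRjNTBmODU4MmU3ZjVlYzdlNzZjYmVlMzk1YjY4YWY1YzdkZWI5NmRiMjAyNmUxYTcxOGIzZmJkNDZkNTg1Y8k+rDI=:" \
        > "$host/dhchap_ctrl_key"

    # Initiator side: restrict SPDK to the same digest/dhgroup, attach with
    # the matching keyring entries, verify the controller came up, detach.
    rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    rpc.py bdev_nvme_detach_controller nvme0

The "nvme0n1" lines interleaved in the trace are the kernel namespace appearing as each attach succeeds, before the controller is detached again for the next combination.
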
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJjYzgzMDVmYmYwMDJhNzUyMzI2Y2Y0MjE5Nzg2YjgyNjBhZDhhNzI1OGZjMjhiJ3xOtg==: 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJjYzgzMDVmYmYwMDJhNzUyMzI2Y2Y0MjE5Nzg2YjgyNjBhZDhhNzI1OGZjMjhiJ3xOtg==: 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: ]] 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:26.212 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:26.213 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:26.213 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:26.213 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:26.213 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:26.213 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:26.213 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:26.213 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:26.213 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:26.213 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:26.473 nvme0n1 00:39:26.473 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:26.473 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:26.473 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:26.473 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:26.473 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:26.473 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:26.473 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:26.473 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:26.473 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:26.473 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:26.473 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:26.473 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:26.473 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:39:26.473 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:26.473 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:26.473 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:26.473 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:26.473 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWMxMzI4YWNkMDg3MDk4YzFkZTAzMTFlNTFiNTg3OGNulVAJ: 00:39:26.473 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: 00:39:26.473 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:26.473 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:26.473 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:YWMxMzI4YWNkMDg3MDk4YzFkZTAzMTFlNTFiNTg3OGNulVAJ: 00:39:26.473 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: ]] 00:39:26.473 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: 00:39:26.473 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:39:26.473 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:26.473 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:26.473 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:26.473 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:26.473 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:26.473 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:39:26.473 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:26.473 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:26.473 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:26.473 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:26.473 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:26.473 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:26.474 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:26.474 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:26.474 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:26.474 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:26.474 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:26.474 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:26.474 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:26.474 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:26.474 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:26.474 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:26.474 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:26.735 nvme0n1 00:39:26.735 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:26.735 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:26.735 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:26.735 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:39:26.735 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:26.735 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:26.735 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:26.735 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:26.735 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:26.735 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:26.735 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:26.735 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:26.735 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:39:26.735 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:26.735 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:26.735 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:26.735 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:26.735 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E3MGI4NmNjMmI1MGYyMDI1MDJkZmZmMzRjMzA5OWZkOTQxYzM4MTQ4Zjk5NTRmM0RBIQ==: 00:39:26.735 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzE2NWMzNTYyNzM2NzVkODVkZTE3ZjVhMzUyN2Y4NWaSDkhX: 00:39:26.735 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:26.735 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:26.735 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E3MGI4NmNjMmI1MGYyMDI1MDJkZmZmMzRjMzA5OWZkOTQxYzM4MTQ4Zjk5NTRmM0RBIQ==: 00:39:26.735 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzE2NWMzNTYyNzM2NzVkODVkZTE3ZjVhMzUyN2Y4NWaSDkhX: ]] 00:39:26.735 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzE2NWMzNTYyNzM2NzVkODVkZTE3ZjVhMzUyN2Y4NWaSDkhX: 00:39:26.735 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:39:26.735 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:26.735 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:26.735 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:26.735 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:26.735 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:26.735 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:39:26.735 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:26.735 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:26.735 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:26.735 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:39:26.735 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:26.735 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:26.735 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:26.735 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:26.735 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:26.735 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:26.735 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:26.735 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:26.736 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:26.736 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:26.736 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:26.736 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:26.736 22:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:26.997 nvme0n1 00:39:26.997 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:26.997 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:26.997 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:26.997 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:26.997 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:26.997 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:26.997 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:26.997 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:26.997 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:26.997 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:26.997 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:26.997 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:26.997 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:39:26.997 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:26.997 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:26.997 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:26.997 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:26.997 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MzI4NzA3YjgxMGEwNzAyYWM0YmRhYjA5ZjliYzZkYTdhNGQ5ZDU5MGI2MzU5OGJlYTJmZDVkY2U5MmQ1NjIwZrBhziw=: 00:39:26.997 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:26.997 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:26.997 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:26.997 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzI4NzA3YjgxMGEwNzAyYWM0YmRhYjA5ZjliYzZkYTdhNGQ5ZDU5MGI2MzU5OGJlYTJmZDVkY2U5MmQ1NjIwZrBhziw=: 00:39:26.997 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:26.997 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:39:26.997 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:26.997 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:26.997 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:26.997 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:26.997 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:26.997 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:39:26.997 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:26.997 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:26.997 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:26.998 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:26.998 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:26.998 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:26.998 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:26.998 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:26.998 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:26.998 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:26.998 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:26.998 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:26.998 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:26.998 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:26.998 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:26.998 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:26.998 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:27.258 nvme0n1 00:39:27.258 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:27.258 22:38:22 
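
keyid 4 is the one entry with no controller key (ckeys[4] was left empty at host/auth.sh@77, and the [[ -z '' ]] check above takes the empty branch), so the attach above carries --dhchap-key key4 but no --dhchap-ctrlr-key at all. That is the effect of the array expansion at host/auth.sh@58: with an empty ckeys[keyid], the ${var:+...} form expands to nothing, so the option disappears entirely rather than being passed with an empty value. In isolation:

    # The conditional-argument idiom from host/auth.sh@58: ${var:+...}
    # expands to nothing when var is unset or empty, so ckey=() contributes
    # no words to the attach command line for keyid=4.
    ckeys[4]=
    keyid=4
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "extra args: ${#ckey[@]}"   # -> 0; a non-empty ckeys[4] would give 2
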
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:27.258 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:27.259 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:27.259 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:27.259 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:27.259 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:27.259 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:27.259 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:27.259 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:27.259 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:27.259 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:27.259 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:27.259 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:39:27.259 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:27.259 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:27.259 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:27.259 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:27.259 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDI1ZDdiOTI5NjM1ZTZjNDExNzYyNTAxNmQzNmJhMTCNtXb4: 00:39:27.259 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjRjNTBmODU4MmU3ZjVlYzdlNzZjYmVlMzk1YjY4YWY1YzdkZWI5NmRiMjAyNmUxYTcxOGIzZmJkNDZkNTg1Y8k+rDI=: 00:39:27.259 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:27.259 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:27.520 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDI1ZDdiOTI5NjM1ZTZjNDExNzYyNTAxNmQzNmJhMTCNtXb4: 00:39:27.520 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjRjNTBmODU4MmU3ZjVlYzdlNzZjYmVlMzk1YjY4YWY1YzdkZWI5NmRiMjAyNmUxYTcxOGIzZmJkNDZkNTg1Y8k+rDI=: ]] 00:39:27.520 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjRjNTBmODU4MmU3ZjVlYzdlNzZjYmVlMzk1YjY4YWY1YzdkZWI5NmRiMjAyNmUxYTcxOGIzZmJkNDZkNTg1Y8k+rDI=: 00:39:27.520 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:39:27.520 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:27.520 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:27.520 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:27.520 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:27.520 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:27.520 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:39:27.520 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:27.520 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:27.520 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:27.520 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:27.520 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:27.520 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:27.520 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:27.520 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:27.520 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:27.520 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:27.520 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:27.520 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:27.520 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:27.520 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:27.520 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:27.521 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:27.521 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:27.783 nvme0n1 00:39:27.783 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:27.783 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:27.783 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:27.783 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:27.783 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:27.783 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:27.783 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:27.783 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:27.783 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:27.783 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:27.783 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:27.783 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:27.783 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:39:27.783 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:39:27.783 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:27.783 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:27.783 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:27.783 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJjYzgzMDVmYmYwMDJhNzUyMzI2Y2Y0MjE5Nzg2YjgyNjBhZDhhNzI1OGZjMjhiJ3xOtg==: 00:39:27.783 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: 00:39:27.783 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:27.783 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:27.783 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJjYzgzMDVmYmYwMDJhNzUyMzI2Y2Y0MjE5Nzg2YjgyNjBhZDhhNzI1OGZjMjhiJ3xOtg==: 00:39:27.783 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: ]] 00:39:27.783 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: 00:39:27.783 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:39:27.783 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:27.783 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:27.783 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:27.783 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:27.783 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:27.783 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:39:27.783 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:27.783 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:27.783 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:27.783 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:27.783 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:27.783 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:27.783 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:27.783 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:27.783 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:27.783 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:27.783 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:27.783 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:27.783 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:27.783 
22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:27.783 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:27.783 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:27.783 22:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.044 nvme0n1 00:39:28.044 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:28.044 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:28.044 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:28.044 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:28.044 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.044 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:28.044 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:28.044 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:28.044 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:28.044 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.044 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:28.044 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:28.044 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:39:28.044 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:28.044 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:28.044 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:28.044 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:28.044 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWMxMzI4YWNkMDg3MDk4YzFkZTAzMTFlNTFiNTg3OGNulVAJ: 00:39:28.044 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: 00:39:28.044 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:28.044 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:28.044 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWMxMzI4YWNkMDg3MDk4YzFkZTAzMTFlNTFiNTg3OGNulVAJ: 00:39:28.044 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: ]] 00:39:28.044 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: 00:39:28.044 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:39:28.044 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:28.044 22:38:23 
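
From here the trace repeats the same set-key/connect/verify/detach pattern for every remaining combination. The driving loops at host/auth.sh@100-@104 are equivalent to the following, with the digest and DH-group lists as printed at host/auth.sh@94 earlier:

    # Full authentication matrix: 3 digests x 5 DH groups x 5 key indexes,
    # each iteration re-keying the kernel target and reconnecting from SPDK.
    for digest in sha256 sha384 sha512; do
        for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
            for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done

The traces below are the sha256/ffdhe3072 leg of that matrix working through key indexes 2, 3, and 4.
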
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:28.044 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:28.044 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:28.044 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:28.044 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:39:28.044 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:28.044 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.044 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:28.044 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:28.044 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:28.044 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:28.044 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:28.044 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:28.044 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:28.044 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:28.044 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:28.044 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:28.044 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:28.044 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:28.044 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:28.044 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:28.044 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.305 nvme0n1 00:39:28.305 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:28.305 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:28.305 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:28.305 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:28.305 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.305 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:28.305 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:28.305 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:28.305 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:28.305 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:39:28.305 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:28.305 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:28.305 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:39:28.305 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:28.305 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:28.305 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:28.305 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:28.305 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E3MGI4NmNjMmI1MGYyMDI1MDJkZmZmMzRjMzA5OWZkOTQxYzM4MTQ4Zjk5NTRmM0RBIQ==: 00:39:28.305 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzE2NWMzNTYyNzM2NzVkODVkZTE3ZjVhMzUyN2Y4NWaSDkhX: 00:39:28.305 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:28.305 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:28.305 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E3MGI4NmNjMmI1MGYyMDI1MDJkZmZmMzRjMzA5OWZkOTQxYzM4MTQ4Zjk5NTRmM0RBIQ==: 00:39:28.305 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzE2NWMzNTYyNzM2NzVkODVkZTE3ZjVhMzUyN2Y4NWaSDkhX: ]] 00:39:28.305 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzE2NWMzNTYyNzM2NzVkODVkZTE3ZjVhMzUyN2Y4NWaSDkhX: 00:39:28.305 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:39:28.305 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:28.305 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:28.305 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:28.305 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:28.305 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:28.305 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:39:28.305 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:28.305 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.305 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:28.305 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:28.305 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:28.305 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:28.305 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:28.305 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:28.305 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:28.305 22:38:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:28.305 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:28.305 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:28.305 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:28.305 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:28.305 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:28.305 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:28.305 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.566 nvme0n1 00:39:28.566 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:28.566 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:28.566 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:28.566 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:28.566 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.566 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:28.566 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:28.566 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:28.566 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:28.566 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.566 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:28.566 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:28.566 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:39:28.566 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:28.566 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:28.566 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:28.566 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:28.566 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzI4NzA3YjgxMGEwNzAyYWM0YmRhYjA5ZjliYzZkYTdhNGQ5ZDU5MGI2MzU5OGJlYTJmZDVkY2U5MmQ1NjIwZrBhziw=: 00:39:28.566 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:28.566 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:28.566 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:28.566 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzI4NzA3YjgxMGEwNzAyYWM0YmRhYjA5ZjliYzZkYTdhNGQ5ZDU5MGI2MzU5OGJlYTJmZDVkY2U5MmQ1NjIwZrBhziw=: 00:39:28.566 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:28.566 22:38:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:39:28.566 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:28.566 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:28.566 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:28.566 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:28.567 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:28.567 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:39:28.567 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:28.567 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.567 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:28.567 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:28.567 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:28.567 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:28.567 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:28.567 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:28.567 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:28.567 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:28.567 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:28.567 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:28.567 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:28.567 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:28.567 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:28.567 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:28.567 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.833 nvme0n1 00:39:28.833 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:28.833 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:28.833 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:28.833 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:28.833 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.833 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:28.833 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:28.833 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:39:28.833 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:28.833 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.833 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:28.833 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:28.833 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:28.833 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:39:28.833 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:28.833 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:28.833 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:28.833 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:28.833 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDI1ZDdiOTI5NjM1ZTZjNDExNzYyNTAxNmQzNmJhMTCNtXb4: 00:39:28.833 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjRjNTBmODU4MmU3ZjVlYzdlNzZjYmVlMzk1YjY4YWY1YzdkZWI5NmRiMjAyNmUxYTcxOGIzZmJkNDZkNTg1Y8k+rDI=: 00:39:28.833 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:28.833 22:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:29.491 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDI1ZDdiOTI5NjM1ZTZjNDExNzYyNTAxNmQzNmJhMTCNtXb4: 00:39:29.491 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjRjNTBmODU4MmU3ZjVlYzdlNzZjYmVlMzk1YjY4YWY1YzdkZWI5NmRiMjAyNmUxYTcxOGIzZmJkNDZkNTg1Y8k+rDI=: ]] 00:39:29.491 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjRjNTBmODU4MmU3ZjVlYzdlNzZjYmVlMzk1YjY4YWY1YzdkZWI5NmRiMjAyNmUxYTcxOGIzZmJkNDZkNTg1Y8k+rDI=: 00:39:29.491 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:39:29.491 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:29.491 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:29.491 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:29.491 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:29.491 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:29.491 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:39:29.491 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:29.491 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:29.491 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:29.491 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:29.491 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:29.491 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates=() 00:39:29.491 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:29.491 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:29.491 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:29.491 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:29.491 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:29.491 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:29.491 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:29.491 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:29.491 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:29.491 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:29.491 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:29.751 nvme0n1 00:39:29.751 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:29.751 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:29.751 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:29.751 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:29.751 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:29.751 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:29.751 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:29.751 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:29.751 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:29.751 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:29.751 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:29.751 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:29.751 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:39:29.751 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:29.751 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:29.751 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:29.751 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:29.751 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJjYzgzMDVmYmYwMDJhNzUyMzI2Y2Y0MjE5Nzg2YjgyNjBhZDhhNzI1OGZjMjhiJ3xOtg==: 00:39:29.751 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: 00:39:29.751 22:38:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:29.751 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:29.751 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJjYzgzMDVmYmYwMDJhNzUyMzI2Y2Y0MjE5Nzg2YjgyNjBhZDhhNzI1OGZjMjhiJ3xOtg==: 00:39:29.751 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: ]] 00:39:29.751 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: 00:39:29.751 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:39:29.751 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:29.751 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:29.751 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:29.751 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:29.751 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:29.751 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:39:29.751 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:29.751 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:29.751 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:29.751 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:29.751 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:29.751 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:29.751 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:29.751 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:29.751 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:29.751 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:29.751 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:29.751 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:29.751 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:29.751 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:29.751 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:29.751 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:29.751 22:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:30.012 nvme0n1 00:39:30.012 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:39:30.012 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:30.012 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:30.012 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:30.012 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:30.012 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:30.012 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:30.012 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:30.012 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:30.012 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:30.274 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:30.274 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:30.274 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:39:30.274 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:30.274 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:30.274 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:30.274 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:30.274 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWMxMzI4YWNkMDg3MDk4YzFkZTAzMTFlNTFiNTg3OGNulVAJ: 00:39:30.274 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: 00:39:30.274 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:30.274 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:30.274 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWMxMzI4YWNkMDg3MDk4YzFkZTAzMTFlNTFiNTg3OGNulVAJ: 00:39:30.274 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: ]] 00:39:30.274 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: 00:39:30.274 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:39:30.274 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:30.274 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:30.274 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:30.274 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:30.274 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:30.274 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:39:30.274 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
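The ffdhe4096 pass above replays the same host-side cycle as the earlier dhgroups, once per key id: restrict the initiator to a single digest/dhgroup pair, attach with the matching keyring keys, confirm a controller named nvme0 shows up, then detach. As a minimal standalone sketch, the keyid-2 cycle just traced reduces to the following scripts/rpc.py calls (rpc_cmd in the trace is the test harness's wrapper around rpc.py); it assumes the kernel nvmet target is already listening on 10.0.0.1:4420 with matching DHHC-1 secrets installed, and that keyring entries named key2/ckey2 were registered earlier in the run, as in this log:

    # One connect/verify/teardown cycle, mirroring host/auth.sh@60-65 above.
    # Assumptions: rpc.py on PATH, target provisioned, key2/ckey2 in the keyring.
    rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # Pass criterion used by the test: the authenticated controller is listed by name.
    [[ "$(rpc.py bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
    rpc.py bdev_nvme_detach_controller nvme0

If DH-HMAC-CHAP negotiation fails, the attach RPC errors out and no nvme0 controller is listed, which is what the [[ nvme0 == \n\v\m\e\0 ]] comparisons in the trace guard against.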
00:39:30.274 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:30.274 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:30.274 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:30.274 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:30.274 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:30.274 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:30.274 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:30.274 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:30.274 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:30.274 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:30.274 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:30.274 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:30.274 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:30.274 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:30.274 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:30.274 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:30.536 nvme0n1 00:39:30.536 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:30.536 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:30.536 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:30.536 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:30.536 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:30.536 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:30.536 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:30.536 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:30.536 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:30.536 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:30.536 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:30.536 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:30.536 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:39:30.536 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:30.536 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:30.536 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:39:30.536 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:30.536 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E3MGI4NmNjMmI1MGYyMDI1MDJkZmZmMzRjMzA5OWZkOTQxYzM4MTQ4Zjk5NTRmM0RBIQ==: 00:39:30.536 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzE2NWMzNTYyNzM2NzVkODVkZTE3ZjVhMzUyN2Y4NWaSDkhX: 00:39:30.536 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:30.536 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:30.536 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E3MGI4NmNjMmI1MGYyMDI1MDJkZmZmMzRjMzA5OWZkOTQxYzM4MTQ4Zjk5NTRmM0RBIQ==: 00:39:30.536 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzE2NWMzNTYyNzM2NzVkODVkZTE3ZjVhMzUyN2Y4NWaSDkhX: ]] 00:39:30.536 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzE2NWMzNTYyNzM2NzVkODVkZTE3ZjVhMzUyN2Y4NWaSDkhX: 00:39:30.536 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:39:30.536 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:30.536 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:30.536 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:30.536 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:30.536 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:30.536 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:39:30.536 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:30.536 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:30.536 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:30.536 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:30.536 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:30.536 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:30.536 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:30.536 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:30.536 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:30.536 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:30.536 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:30.536 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:30.536 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:30.536 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:30.536 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:30.536 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:30.536 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:30.797 nvme0n1 00:39:30.797 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:30.797 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:30.797 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:30.797 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:30.797 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:30.797 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:30.797 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:30.797 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:30.797 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:30.797 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:30.797 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:30.797 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:30.797 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:39:30.797 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:30.797 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:30.797 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:30.797 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:30.797 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzI4NzA3YjgxMGEwNzAyYWM0YmRhYjA5ZjliYzZkYTdhNGQ5ZDU5MGI2MzU5OGJlYTJmZDVkY2U5MmQ1NjIwZrBhziw=: 00:39:30.797 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:30.797 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:30.797 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:30.797 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzI4NzA3YjgxMGEwNzAyYWM0YmRhYjA5ZjliYzZkYTdhNGQ5ZDU5MGI2MzU5OGJlYTJmZDVkY2U5MmQ1NjIwZrBhziw=: 00:39:30.797 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:30.797 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:39:30.797 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:30.797 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:30.797 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:30.797 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:30.797 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:30.797 22:38:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:39:30.797 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:30.797 22:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:30.797 22:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:30.797 22:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:30.797 22:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:30.797 22:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:30.797 22:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:30.797 22:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:30.797 22:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:30.797 22:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:30.797 22:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:30.797 22:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:30.797 22:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:30.797 22:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:30.797 22:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:30.797 22:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:30.797 22:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:31.057 nvme0n1 00:39:31.057 22:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:31.057 22:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:31.057 22:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:31.057 22:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:31.057 22:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:31.057 22:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:31.317 22:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:31.317 22:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:31.317 22:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:31.317 22:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:31.317 22:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:31.317 22:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:31.317 22:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:31.317 22:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:39:31.317 22:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:31.317 22:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:31.317 22:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:31.317 22:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:31.317 22:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDI1ZDdiOTI5NjM1ZTZjNDExNzYyNTAxNmQzNmJhMTCNtXb4: 00:39:31.317 22:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjRjNTBmODU4MmU3ZjVlYzdlNzZjYmVlMzk1YjY4YWY1YzdkZWI5NmRiMjAyNmUxYTcxOGIzZmJkNDZkNTg1Y8k+rDI=: 00:39:31.317 22:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:31.317 22:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:33.241 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDI1ZDdiOTI5NjM1ZTZjNDExNzYyNTAxNmQzNmJhMTCNtXb4: 00:39:33.241 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjRjNTBmODU4MmU3ZjVlYzdlNzZjYmVlMzk1YjY4YWY1YzdkZWI5NmRiMjAyNmUxYTcxOGIzZmJkNDZkNTg1Y8k+rDI=: ]] 00:39:33.241 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjRjNTBmODU4MmU3ZjVlYzdlNzZjYmVlMzk1YjY4YWY1YzdkZWI5NmRiMjAyNmUxYTcxOGIzZmJkNDZkNTg1Y8k+rDI=: 00:39:33.241 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:39:33.241 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:33.241 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:33.241 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:33.241 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:33.241 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:33.241 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:39:33.241 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:33.241 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:33.241 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:33.241 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:33.241 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:33.241 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:33.241 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:33.241 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:33.241 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:33.241 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:33.241 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:33.241 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # 
ip=NVMF_INITIATOR_IP 00:39:33.241 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:33.241 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:33.241 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:33.241 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:33.241 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:33.241 nvme0n1 00:39:33.241 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:33.501 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:33.501 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:33.501 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:33.501 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:33.501 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:33.501 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:33.501 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:33.501 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:33.501 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:33.501 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:33.501 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:33.501 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:39:33.501 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:33.501 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:33.501 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:33.501 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:33.501 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJjYzgzMDVmYmYwMDJhNzUyMzI2Y2Y0MjE5Nzg2YjgyNjBhZDhhNzI1OGZjMjhiJ3xOtg==: 00:39:33.501 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: 00:39:33.501 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:33.501 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:33.501 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJjYzgzMDVmYmYwMDJhNzUyMzI2Y2Y0MjE5Nzg2YjgyNjBhZDhhNzI1OGZjMjhiJ3xOtg==: 00:39:33.501 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: ]] 00:39:33.501 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: 
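The target half of each iteration is the nvmet_auth_set_key block (host/auth.sh@42-51): it selects the digest, dhgroup, and key pair for one key id and echoes them into the kernel nvmet configuration. The echo destinations are not visible in the trace, so the sketch below is an assumption based on the stock Linux nvmet configfs layout (dhchap_* attributes under the host's entry); the values are the ones this sha256/ffdhe6144, keyid-1 iteration prints:

    # Assumed configfs targets for the echo calls at host/auth.sh@48-51.
    host_cfg=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host_cfg/dhchap_hash"     # @48: digest
    echo 'ffdhe6144'    > "$host_cfg/dhchap_dhgroup"  # @49: DH group
    # @50: host secret; @51: controller secret, written only when ckey is non-empty
    echo 'DHHC-1:00:MjJjYzgzMDVmYmYwMDJhNzUyMzI2Y2Y0MjE5Nzg2YjgyNjBhZDhhNzI1OGZjMjhiJ3xOtg==:' > "$host_cfg/dhchap_key"
    echo 'DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==:' > "$host_cfg/dhchap_ctrl_key"

The [[ -z '' ]] branches in the keyid-4 iterations show the other path: their ckey is empty, so only the unidirectional host secret is installed, and the host correspondingly attaches with --dhchap-key key4 and no --dhchap-ctrlr-key.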
00:39:33.501 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:39:33.501 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:33.501 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:33.501 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:33.501 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:33.501 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:33.501 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:39:33.501 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:33.501 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:33.501 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:33.501 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:33.502 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:33.502 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:33.502 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:33.502 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:33.502 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:33.502 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:33.502 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:33.502 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:33.502 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:33.502 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:33.502 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:33.502 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:33.502 22:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:34.071 nvme0n1 00:39:34.071 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:34.071 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:34.071 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:34.071 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:34.071 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:34.071 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:34.071 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:34.071 22:38:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:34.071 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:34.071 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:34.071 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:34.071 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:34.071 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:39:34.071 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:34.071 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:34.071 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:34.071 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:34.071 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWMxMzI4YWNkMDg3MDk4YzFkZTAzMTFlNTFiNTg3OGNulVAJ: 00:39:34.071 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: 00:39:34.071 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:34.071 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:34.071 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWMxMzI4YWNkMDg3MDk4YzFkZTAzMTFlNTFiNTg3OGNulVAJ: 00:39:34.071 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: ]] 00:39:34.071 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: 00:39:34.071 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:39:34.071 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:34.071 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:34.071 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:34.071 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:34.071 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:34.071 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:39:34.071 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:34.071 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:34.071 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:34.071 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:34.071 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:34.071 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:34.071 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:34.071 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:34.071 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:34.071 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:34.071 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:34.071 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:34.071 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:34.071 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:34.071 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:34.071 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:34.071 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:34.331 nvme0n1 00:39:34.331 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:34.331 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:34.331 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:34.331 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:34.331 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:34.331 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:34.331 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:34.331 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:34.331 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:34.331 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:34.591 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:34.591 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:34.591 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:39:34.591 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:34.591 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:34.591 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:34.591 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:34.591 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E3MGI4NmNjMmI1MGYyMDI1MDJkZmZmMzRjMzA5OWZkOTQxYzM4MTQ4Zjk5NTRmM0RBIQ==: 00:39:34.591 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzE2NWMzNTYyNzM2NzVkODVkZTE3ZjVhMzUyN2Y4NWaSDkhX: 00:39:34.591 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:34.591 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:34.591 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:M2E3MGI4NmNjMmI1MGYyMDI1MDJkZmZmMzRjMzA5OWZkOTQxYzM4MTQ4Zjk5NTRmM0RBIQ==: 00:39:34.591 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzE2NWMzNTYyNzM2NzVkODVkZTE3ZjVhMzUyN2Y4NWaSDkhX: ]] 00:39:34.591 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzE2NWMzNTYyNzM2NzVkODVkZTE3ZjVhMzUyN2Y4NWaSDkhX: 00:39:34.591 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:39:34.591 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:34.591 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:34.591 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:34.591 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:34.591 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:34.591 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:39:34.591 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:34.591 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:34.592 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:34.592 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:34.592 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:34.592 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:34.592 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:34.592 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:34.592 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:34.592 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:34.592 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:34.592 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:34.592 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:34.592 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:34.592 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:34.592 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:34.592 22:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:34.852 nvme0n1 00:39:34.852 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:34.852 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:34.852 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:34.852 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:39:34.852 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:34.852 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:34.852 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:34.852 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:34.852 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:34.852 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:35.113 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:35.113 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:35.113 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:39:35.113 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:35.113 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:35.113 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:35.113 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:35.113 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzI4NzA3YjgxMGEwNzAyYWM0YmRhYjA5ZjliYzZkYTdhNGQ5ZDU5MGI2MzU5OGJlYTJmZDVkY2U5MmQ1NjIwZrBhziw=: 00:39:35.113 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:35.113 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:35.113 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:35.113 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzI4NzA3YjgxMGEwNzAyYWM0YmRhYjA5ZjliYzZkYTdhNGQ5ZDU5MGI2MzU5OGJlYTJmZDVkY2U5MmQ1NjIwZrBhziw=: 00:39:35.113 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:35.113 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:39:35.113 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:35.113 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:35.113 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:35.113 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:35.113 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:35.113 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:39:35.113 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:35.113 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:35.113 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:35.113 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:35.113 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:35.113 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # ip_candidates=() 00:39:35.113 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:35.113 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:35.113 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:35.113 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:35.113 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:35.113 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:35.113 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:35.113 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:35.113 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:35.113 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:35.113 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:35.374 nvme0n1 00:39:35.374 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:35.374 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:35.374 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:35.374 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:35.374 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:35.374 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:35.634 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:35.634 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:35.634 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:35.634 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:35.634 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:35.634 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:35.634 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:35.634 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:39:35.634 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:35.634 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:35.634 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:35.634 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:35.634 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDI1ZDdiOTI5NjM1ZTZjNDExNzYyNTAxNmQzNmJhMTCNtXb4: 00:39:35.634 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MjRjNTBmODU4MmU3ZjVlYzdlNzZjYmVlMzk1YjY4YWY1YzdkZWI5NmRiMjAyNmUxYTcxOGIzZmJkNDZkNTg1Y8k+rDI=: 00:39:35.634 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:35.634 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:35.634 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDI1ZDdiOTI5NjM1ZTZjNDExNzYyNTAxNmQzNmJhMTCNtXb4: 00:39:35.634 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjRjNTBmODU4MmU3ZjVlYzdlNzZjYmVlMzk1YjY4YWY1YzdkZWI5NmRiMjAyNmUxYTcxOGIzZmJkNDZkNTg1Y8k+rDI=: ]] 00:39:35.634 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjRjNTBmODU4MmU3ZjVlYzdlNzZjYmVlMzk1YjY4YWY1YzdkZWI5NmRiMjAyNmUxYTcxOGIzZmJkNDZkNTg1Y8k+rDI=: 00:39:35.634 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:39:35.634 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:35.634 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:35.634 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:35.634 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:35.634 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:35.634 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:39:35.634 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:35.634 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:35.634 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:35.634 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:35.634 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:35.634 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:35.634 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:35.634 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:35.634 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:35.634 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:35.634 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:35.634 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:35.634 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:35.634 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:35.634 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:35.634 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:35.634 22:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:39:36.205 nvme0n1 00:39:36.205 22:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:36.205 22:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:36.205 22:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:36.205 22:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:36.205 22:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:36.205 22:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:36.466 22:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:36.466 22:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:36.466 22:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:36.466 22:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:36.466 22:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:36.466 22:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:36.466 22:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:39:36.466 22:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:36.466 22:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:36.466 22:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:36.466 22:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:36.466 22:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJjYzgzMDVmYmYwMDJhNzUyMzI2Y2Y0MjE5Nzg2YjgyNjBhZDhhNzI1OGZjMjhiJ3xOtg==: 00:39:36.466 22:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: 00:39:36.466 22:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:36.466 22:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:36.466 22:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJjYzgzMDVmYmYwMDJhNzUyMzI2Y2Y0MjE5Nzg2YjgyNjBhZDhhNzI1OGZjMjhiJ3xOtg==: 00:39:36.466 22:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: ]] 00:39:36.466 22:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: 00:39:36.466 22:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:39:36.466 22:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:36.466 22:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:36.466 22:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:36.466 22:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:36.466 22:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:39:36.466 22:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:39:36.466 22:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:36.466 22:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:36.466 22:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:36.466 22:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:36.466 22:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:36.466 22:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:36.466 22:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:36.466 22:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:36.466 22:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:36.466 22:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:36.466 22:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:36.466 22:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:36.466 22:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:36.466 22:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:36.466 22:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:36.466 22:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:36.466 22:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:37.037 nvme0n1 00:39:37.037 22:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:37.037 22:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:37.037 22:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:37.037 22:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:37.037 22:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:37.037 22:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:37.298 22:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:37.298 22:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:37.298 22:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:37.298 22:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:37.298 22:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:37.298 22:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:37.298 22:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:39:37.298 
22:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:37.298 22:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:37.298 22:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:37.298 22:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:37.298 22:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWMxMzI4YWNkMDg3MDk4YzFkZTAzMTFlNTFiNTg3OGNulVAJ: 00:39:37.298 22:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: 00:39:37.298 22:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:37.298 22:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:37.298 22:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWMxMzI4YWNkMDg3MDk4YzFkZTAzMTFlNTFiNTg3OGNulVAJ: 00:39:37.298 22:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: ]] 00:39:37.298 22:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: 00:39:37.298 22:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:39:37.298 22:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:37.298 22:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:37.298 22:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:37.298 22:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:37.298 22:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:37.298 22:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:39:37.298 22:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:37.298 22:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:37.298 22:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:37.298 22:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:37.298 22:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:37.298 22:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:37.298 22:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:37.298 22:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:37.298 22:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:37.298 22:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:37.298 22:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:37.298 22:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:37.298 22:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:37.298 22:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:37.298 22:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:37.298 22:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:37.298 22:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:37.870 nvme0n1 00:39:37.870 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:37.870 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:37.870 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:37.870 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:37.870 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:37.870 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:38.130 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:38.130 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:38.130 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:38.130 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:38.131 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:38.131 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:38.131 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:39:38.131 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:38.131 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:38.131 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:38.131 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:38.131 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E3MGI4NmNjMmI1MGYyMDI1MDJkZmZmMzRjMzA5OWZkOTQxYzM4MTQ4Zjk5NTRmM0RBIQ==: 00:39:38.131 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzE2NWMzNTYyNzM2NzVkODVkZTE3ZjVhMzUyN2Y4NWaSDkhX: 00:39:38.131 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:38.131 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:38.131 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E3MGI4NmNjMmI1MGYyMDI1MDJkZmZmMzRjMzA5OWZkOTQxYzM4MTQ4Zjk5NTRmM0RBIQ==: 00:39:38.131 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzE2NWMzNTYyNzM2NzVkODVkZTE3ZjVhMzUyN2Y4NWaSDkhX: ]] 00:39:38.131 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzE2NWMzNTYyNzM2NzVkODVkZTE3ZjVhMzUyN2Y4NWaSDkhX: 00:39:38.131 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:39:38.131 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:38.131 
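Stripped of the rpc_cmd wrapper (which in these suites is, as far as I can tell, a thin shell around SPDK's scripts/rpc.py; that mapping is an assumption here), each connect_authenticate round boils down to the two RPCs with exactly the flags visible in the trace. key2/ckey2 name keyring entries registered earlier in the run, not inline secrets:

  scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2
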
22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:38.131 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:38.131 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:38.131 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:38.131 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:39:38.131 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:38.131 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:38.131 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:38.131 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:38.131 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:38.131 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:38.131 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:38.131 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:38.131 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:38.131 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:38.131 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:38.131 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:38.131 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:38.131 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:38.131 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:38.131 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:38.131 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:38.701 nvme0n1 00:39:38.701 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:38.701 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:38.701 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:38.701 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:38.701 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:38.701 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:38.961 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:38.961 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:38.961 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:38.961 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:39:38.961 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:38.961 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:38.961 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:39:38.961 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:38.961 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:38.961 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:38.961 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:38.961 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzI4NzA3YjgxMGEwNzAyYWM0YmRhYjA5ZjliYzZkYTdhNGQ5ZDU5MGI2MzU5OGJlYTJmZDVkY2U5MmQ1NjIwZrBhziw=: 00:39:38.961 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:38.961 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:38.961 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:38.961 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzI4NzA3YjgxMGEwNzAyYWM0YmRhYjA5ZjliYzZkYTdhNGQ5ZDU5MGI2MzU5OGJlYTJmZDVkY2U5MmQ1NjIwZrBhziw=: 00:39:38.961 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:38.961 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:39:38.961 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:38.961 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:38.961 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:38.961 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:38.961 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:38.961 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:39:38.961 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:38.961 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:38.961 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:38.961 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:38.961 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:38.961 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:38.961 22:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:38.961 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:38.961 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:38.961 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:38.961 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:38.961 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host 
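get_main_ns_ip, traced in full before every attach above, only ever does one thing: map the transport to the name of an environment variable and expand that name indirectly. Condensed (the TEST_TRANSPORT variable name is an assumption; the trace only shows its expanded value, tcp):

  get_main_ns_ip() {
    local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
    local ip=${ip_candidates[$TEST_TRANSPORT]}   # tcp -> NVMF_INITIATOR_IP
    [[ -n $ip && -n ${!ip} ]] && echo "${!ip}"   # -> 10.0.0.1 in this run
  }
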
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:38.961 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:38.961 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:38.961 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:38.961 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:38.961 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:39.530 nvme0n1 00:39:39.530 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:39.530 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:39.530 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:39.530 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:39.530 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:39.530 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDI1ZDdiOTI5NjM1ZTZjNDExNzYyNTAxNmQzNmJhMTCNtXb4: 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjRjNTBmODU4MmU3ZjVlYzdlNzZjYmVlMzk1YjY4YWY1YzdkZWI5NmRiMjAyNmUxYTcxOGIzZmJkNDZkNTg1Y8k+rDI=: 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDI1ZDdiOTI5NjM1ZTZjNDExNzYyNTAxNmQzNmJhMTCNtXb4: 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MjRjNTBmODU4MmU3ZjVlYzdlNzZjYmVlMzk1YjY4YWY1YzdkZWI5NmRiMjAyNmUxYTcxOGIzZmJkNDZkNTg1Y8k+rDI=: ]] 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjRjNTBmODU4MmU3ZjVlYzdlNzZjYmVlMzk1YjY4YWY1YzdkZWI5NmRiMjAyNmUxYTcxOGIzZmJkNDZkNTg1Y8k+rDI=: 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:39.792 nvme0n1 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:39:39.792 22:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:39.792 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:39.792 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:39.792 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:39.792 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:40.053 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJjYzgzMDVmYmYwMDJhNzUyMzI2Y2Y0MjE5Nzg2YjgyNjBhZDhhNzI1OGZjMjhiJ3xOtg==: 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJjYzgzMDVmYmYwMDJhNzUyMzI2Y2Y0MjE5Nzg2YjgyNjBhZDhhNzI1OGZjMjhiJ3xOtg==: 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: ]] 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:40.054 nvme0n1 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWMxMzI4YWNkMDg3MDk4YzFkZTAzMTFlNTFiNTg3OGNulVAJ: 00:39:40.054 22:38:35 
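The pass/fail check for every iteration is the short sequence at auth.sh@64-65: the freshly attached controller must be reported back as nvme0 by the target's RPC before it is detached again. In rpc.py terms (same wrapper assumption as above):

  name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]                               # auth.sh@64
  scripts/rpc.py bdev_nvme_detach_controller nvme0   # auth.sh@65
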
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWMxMzI4YWNkMDg3MDk4YzFkZTAzMTFlNTFiNTg3OGNulVAJ: 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: ]] 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:40.054 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:40.315 nvme0n1 00:39:40.315 22:38:35 
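One subtlety worth noting in the trace: keyid 4 has no controller key, and auth.sh@58 drops the --dhchap-ctrlr-key flag entirely rather than passing it empty, via bash's :+ expansion into an array. A self-contained demo of the trick (the ckeys contents here are made up for illustration):

  ckeys=([3]=DHHC-placeholder [4]=)
  for keyid in 3 4; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid args: --dhchap-key key${keyid} ${ckey[*]}"
  done
  # keyid=3 gets '--dhchap-ctrlr-key ckey3'; keyid=4 expands to zero extra words
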
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:40.315 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:40.315 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:40.315 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:40.315 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:40.315 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:40.315 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:40.315 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:40.315 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:40.315 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:40.315 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:40.315 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:40.315 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:39:40.315 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:40.315 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:40.315 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:40.315 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:40.315 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E3MGI4NmNjMmI1MGYyMDI1MDJkZmZmMzRjMzA5OWZkOTQxYzM4MTQ4Zjk5NTRmM0RBIQ==: 00:39:40.315 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzE2NWMzNTYyNzM2NzVkODVkZTE3ZjVhMzUyN2Y4NWaSDkhX: 00:39:40.315 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:40.315 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:40.315 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E3MGI4NmNjMmI1MGYyMDI1MDJkZmZmMzRjMzA5OWZkOTQxYzM4MTQ4Zjk5NTRmM0RBIQ==: 00:39:40.315 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzE2NWMzNTYyNzM2NzVkODVkZTE3ZjVhMzUyN2Y4NWaSDkhX: ]] 00:39:40.315 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzE2NWMzNTYyNzM2NzVkODVkZTE3ZjVhMzUyN2Y4NWaSDkhX: 00:39:40.315 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:39:40.315 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:40.315 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:40.315 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:40.315 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:40.315 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:40.315 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:39:40.315 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:40.315 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:40.315 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:40.315 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:40.315 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:40.315 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:40.315 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:40.315 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:40.315 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:40.315 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:40.315 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:40.315 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:40.315 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:40.315 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:40.315 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:40.315 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:40.315 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:40.575 nvme0n1 00:39:40.575 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:40.575 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:40.575 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:40.575 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:40.575 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:40.575 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:40.575 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:40.575 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:40.575 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:40.575 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:40.575 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:40.575 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:40.575 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:39:40.575 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:40.575 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:39:40.575 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:40.575 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:40.575 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzI4NzA3YjgxMGEwNzAyYWM0YmRhYjA5ZjliYzZkYTdhNGQ5ZDU5MGI2MzU5OGJlYTJmZDVkY2U5MmQ1NjIwZrBhziw=: 00:39:40.575 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:40.575 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:40.575 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:40.575 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzI4NzA3YjgxMGEwNzAyYWM0YmRhYjA5ZjliYzZkYTdhNGQ5ZDU5MGI2MzU5OGJlYTJmZDVkY2U5MmQ1NjIwZrBhziw=: 00:39:40.575 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:40.575 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:39:40.575 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:40.575 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:40.575 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:40.575 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:40.575 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:40.575 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:39:40.575 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:40.575 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:40.575 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:40.575 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:40.575 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:40.575 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:40.575 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:40.575 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:40.575 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:40.575 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:40.575 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:40.575 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:40.575 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:40.575 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:40.575 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:40.575 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:39:40.575 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:40.837 nvme0n1 00:39:40.837 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:40.837 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:40.837 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:40.837 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:40.837 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:40.837 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:40.837 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:40.837 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:40.837 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:40.837 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:40.837 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:40.837 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:40.837 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:40.837 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:39:40.837 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:40.837 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:40.837 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:40.837 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:40.837 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDI1ZDdiOTI5NjM1ZTZjNDExNzYyNTAxNmQzNmJhMTCNtXb4: 00:39:40.837 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjRjNTBmODU4MmU3ZjVlYzdlNzZjYmVlMzk1YjY4YWY1YzdkZWI5NmRiMjAyNmUxYTcxOGIzZmJkNDZkNTg1Y8k+rDI=: 00:39:40.837 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:40.837 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:40.837 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDI1ZDdiOTI5NjM1ZTZjNDExNzYyNTAxNmQzNmJhMTCNtXb4: 00:39:40.837 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjRjNTBmODU4MmU3ZjVlYzdlNzZjYmVlMzk1YjY4YWY1YzdkZWI5NmRiMjAyNmUxYTcxOGIzZmJkNDZkNTg1Y8k+rDI=: ]] 00:39:40.837 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjRjNTBmODU4MmU3ZjVlYzdlNzZjYmVlMzk1YjY4YWY1YzdkZWI5NmRiMjAyNmUxYTcxOGIzZmJkNDZkNTg1Y8k+rDI=: 00:39:40.837 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:39:40.837 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:40.837 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:40.837 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:39:40.837 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:40.837 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:40.837 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:39:40.837 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:40.837 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:40.837 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:40.837 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:40.837 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:40.837 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:40.837 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:40.837 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:40.837 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:40.837 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:40.837 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:40.837 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:40.837 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:40.837 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:40.837 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:40.837 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:40.837 22:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:41.097 nvme0n1 00:39:41.097 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:41.097 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:41.097 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:41.097 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:41.097 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:41.097 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:41.097 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:41.097 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:41.097 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:41.097 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:41.097 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:41.097 
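For orientation between these repeating blocks: the host/auth.sh@101-@104 markers in the trace correspond to a nested driver loop that, for a fixed digest (sha384 in this pass), walks every DH group and every key index, programs the key on the target, then authenticates from the host. A minimal sketch of that loop as reconstructed from the trace markers (the array contents beyond the dhgroups and key IDs visible in this log are assumptions):

    digest=sha384
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)   # groups seen in this trace
    for dhgroup in "${dhgroups[@]}"; do                  # host/auth.sh@101
        for keyid in "${!keys[@]}"; do                   # host/auth.sh@102, keyid 0..4
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # @103: target side
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # @104: host side
        done
    done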
22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:41.097 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:39:41.098 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:41.098 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:41.098 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:41.098 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:41.098 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJjYzgzMDVmYmYwMDJhNzUyMzI2Y2Y0MjE5Nzg2YjgyNjBhZDhhNzI1OGZjMjhiJ3xOtg==: 00:39:41.098 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: 00:39:41.098 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:41.098 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:41.098 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJjYzgzMDVmYmYwMDJhNzUyMzI2Y2Y0MjE5Nzg2YjgyNjBhZDhhNzI1OGZjMjhiJ3xOtg==: 00:39:41.098 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: ]] 00:39:41.098 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: 00:39:41.098 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:39:41.098 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:41.098 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:41.098 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:41.098 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:41.098 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:41.098 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:39:41.098 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:41.098 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:41.098 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:41.098 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:41.098 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:41.098 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:41.098 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:41.098 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:41.098 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:41.098 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:41.098 22:38:36 
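The nvmet_auth_set_key steps above expose only their echo arguments ('hmac(sha384)', the DH group name, and the DHHC-1 secrets at @48-@51); the trace does not show where those echoes are redirected. A plausible sketch, assuming they land in the kernel nvmet configfs attributes for the allowed host NQN (the configfs path and attribute names below are assumptions, not confirmed by this log):

    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        # assumed location of the allowed-host entry for this test
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac($digest)" > "$host/dhchap_hash"      # @48
        echo "$dhgroup"      > "$host/dhchap_dhgroup"   # @49
        echo "$key"          > "$host/dhchap_key"       # @50
        # @51: a controller key is set only for bidirectional auth (empty for keyid 4)
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
    }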
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:41.098 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:41.098 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:41.098 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:41.098 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:41.098 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:41.098 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:41.360 nvme0n1 00:39:41.360 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:41.360 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:41.360 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:41.360 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:41.360 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:41.360 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:41.360 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:41.360 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:41.360 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:41.360 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:41.360 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:41.360 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:41.360 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:39:41.360 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:41.360 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:41.360 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:41.360 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:41.360 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWMxMzI4YWNkMDg3MDk4YzFkZTAzMTFlNTFiNTg3OGNulVAJ: 00:39:41.360 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: 00:39:41.360 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:41.360 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:41.360 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWMxMzI4YWNkMDg3MDk4YzFkZTAzMTFlNTFiNTg3OGNulVAJ: 00:39:41.360 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: ]] 00:39:41.360 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: 00:39:41.360 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:39:41.360 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:41.360 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:41.360 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:41.360 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:41.360 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:41.360 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:39:41.360 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:41.360 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:41.360 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:41.360 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:41.360 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:41.360 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:41.360 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:41.360 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:41.360 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:41.360 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:41.360 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:41.360 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:41.360 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:41.360 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:41.360 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:41.360 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:41.360 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:41.622 nvme0n1 00:39:41.622 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:41.622 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:41.622 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:41.622 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:41.622 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:41.622 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:41.622 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:39:41.622 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:41.622 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:41.622 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:41.622 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:41.622 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:41.622 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:39:41.622 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:41.622 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:41.622 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:41.622 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:41.622 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E3MGI4NmNjMmI1MGYyMDI1MDJkZmZmMzRjMzA5OWZkOTQxYzM4MTQ4Zjk5NTRmM0RBIQ==: 00:39:41.622 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzE2NWMzNTYyNzM2NzVkODVkZTE3ZjVhMzUyN2Y4NWaSDkhX: 00:39:41.622 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:41.622 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:41.622 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E3MGI4NmNjMmI1MGYyMDI1MDJkZmZmMzRjMzA5OWZkOTQxYzM4MTQ4Zjk5NTRmM0RBIQ==: 00:39:41.622 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzE2NWMzNTYyNzM2NzVkODVkZTE3ZjVhMzUyN2Y4NWaSDkhX: ]] 00:39:41.622 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzE2NWMzNTYyNzM2NzVkODVkZTE3ZjVhMzUyN2Y4NWaSDkhX: 00:39:41.622 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:39:41.622 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:41.622 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:41.622 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:41.622 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:41.622 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:41.622 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:39:41.623 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:41.623 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:41.623 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:41.623 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:41.623 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:41.623 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:41.623 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local 
-A ip_candidates 00:39:41.623 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:41.623 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:41.623 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:41.623 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:41.623 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:41.623 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:41.623 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:41.623 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:41.623 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:41.623 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:41.884 nvme0n1 00:39:41.885 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:41.885 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:41.885 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:41.885 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:41.885 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:41.885 22:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:41.885 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:41.885 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:41.885 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:41.885 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:41.885 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:41.885 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:41.885 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:39:41.885 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:41.885 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:41.885 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:41.885 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:41.885 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzI4NzA3YjgxMGEwNzAyYWM0YmRhYjA5ZjliYzZkYTdhNGQ5ZDU5MGI2MzU5OGJlYTJmZDVkY2U5MmQ1NjIwZrBhziw=: 00:39:41.885 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:41.885 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:41.885 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:41.885 
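The host side of each iteration, connect_authenticate (@104), is fully visible in the trace: it restricts the allowed digest and DH group with bdev_nvme_set_options, resolves the target address, and attaches with the matching key pair. Condensed into one place from the commands shown above (only the function wrapper is mine; every flag is taken verbatim from the trace):

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3                             # @55-@57
        # @58: ckeyN is passed only when a controller key exists for this keyid
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"     # @60
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"                     # @61
    }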
22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzI4NzA3YjgxMGEwNzAyYWM0YmRhYjA5ZjliYzZkYTdhNGQ5ZDU5MGI2MzU5OGJlYTJmZDVkY2U5MmQ1NjIwZrBhziw=: 00:39:41.885 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:41.885 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:39:41.885 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:41.885 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:41.885 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:41.885 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:41.885 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:41.885 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:39:41.885 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:41.885 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:41.885 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:41.885 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:41.885 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:41.885 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:41.885 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:41.885 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:41.885 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:41.885 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:41.885 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:41.885 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:41.885 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:41.885 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:41.885 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:41.885 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:41.885 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:42.146 nvme0n1 00:39:42.146 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:42.146 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:42.146 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:42.146 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:42.146 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:42.146 
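The get_main_ns_ip block that repeats before every attach maps the transport in use to the environment variable holding the target address (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp), which is why every attach here goes to 10.0.0.1. A sketch matching the nvmf/common.sh trace lines (the TEST_TRANSPORT variable name is an assumption; the trace shows only its value, tcp):

    get_main_ns_ip() {
        local ip                                    # nvmf/common.sh@767
        local -A ip_candidates=()                   # @768
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP  # @770
        ip_candidates["tcp"]=NVMF_INITIATOR_IP      # @771
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # @773
        ip=${ip_candidates[$TEST_TRANSPORT]}        # @774: holds the variable *name*
        ip=${!ip}                                   # indirect expansion -> 10.0.0.1
        [[ -z $ip ]] && return 1                    # @776
        echo "$ip"                                  # @781
    }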
22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:42.146 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:42.146 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:42.146 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:42.146 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:42.146 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:42.146 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:42.146 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:42.146 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:39:42.146 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:42.146 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:42.146 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:42.146 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:42.146 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDI1ZDdiOTI5NjM1ZTZjNDExNzYyNTAxNmQzNmJhMTCNtXb4: 00:39:42.146 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjRjNTBmODU4MmU3ZjVlYzdlNzZjYmVlMzk1YjY4YWY1YzdkZWI5NmRiMjAyNmUxYTcxOGIzZmJkNDZkNTg1Y8k+rDI=: 00:39:42.146 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:42.146 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:42.146 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDI1ZDdiOTI5NjM1ZTZjNDExNzYyNTAxNmQzNmJhMTCNtXb4: 00:39:42.146 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjRjNTBmODU4MmU3ZjVlYzdlNzZjYmVlMzk1YjY4YWY1YzdkZWI5NmRiMjAyNmUxYTcxOGIzZmJkNDZkNTg1Y8k+rDI=: ]] 00:39:42.146 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjRjNTBmODU4MmU3ZjVlYzdlNzZjYmVlMzk1YjY4YWY1YzdkZWI5NmRiMjAyNmUxYTcxOGIzZmJkNDZkNTg1Y8k+rDI=: 00:39:42.146 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:39:42.146 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:42.146 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:42.146 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:42.146 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:42.146 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:42.146 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:39:42.146 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:42.147 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:42.147 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:39:42.147 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:42.147 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:42.147 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:42.147 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:42.147 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:42.147 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:42.147 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:42.147 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:42.147 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:42.147 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:42.147 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:42.147 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:42.147 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:42.147 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:42.408 nvme0n1 00:39:42.408 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:42.408 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:42.408 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:42.408 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:42.408 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:42.408 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:42.408 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:42.408 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:42.408 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:42.408 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:42.408 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:42.408 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:42.408 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:39:42.408 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:42.408 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:42.408 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:42.408 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:42.408 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MjJjYzgzMDVmYmYwMDJhNzUyMzI2Y2Y0MjE5Nzg2YjgyNjBhZDhhNzI1OGZjMjhiJ3xOtg==: 00:39:42.408 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: 00:39:42.408 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:42.408 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:42.408 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJjYzgzMDVmYmYwMDJhNzUyMzI2Y2Y0MjE5Nzg2YjgyNjBhZDhhNzI1OGZjMjhiJ3xOtg==: 00:39:42.408 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: ]] 00:39:42.408 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: 00:39:42.408 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:39:42.408 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:42.408 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:42.408 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:42.408 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:42.408 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:42.408 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:39:42.408 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:42.408 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:42.408 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:42.669 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:42.670 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:42.670 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:42.670 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:42.670 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:42.670 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:42.670 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:42.670 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:42.670 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:42.670 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:42.670 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:42.670 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:42.670 22:38:37 
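A note on the secrets being exchanged here: DH-HMAC-CHAP secrets use the representation DHHC-1:<hh>:<base64>:, where <hh> identifies the hash the secret was transformed with (00 = no transform; 01/02/03 = SHA-256/-384/-512, giving 32/48/64-byte transformed secrets) and the base64 payload is the secret followed by a 4-byte CRC-32. That is why key1 above (DHHC-1:00:..., a 32-byte secret) decodes shorter than its controller key ckey1 (DHHC-1:02:..., 48 bytes). A small illustrative check, not part of the test, assuming only coreutils base64:

    dhchap_secret_len() {                # report raw secret bytes in a DHHC-1 string
        local payload=${1#DHHC-1:??:}    # strip the "DHHC-1:<hh>:" prefix
        payload=${payload%:}             # strip the trailing ":"
        # the last 4 decoded bytes are a CRC-32, not secret material
        echo $(( $(printf '%s' "$payload" | base64 -d | wc -c) - 4 ))
    }
    dhchap_secret_len 'DHHC-1:00:ZDI1ZDdiOTI5NjM1ZTZjNDExNzYyNTAxNmQzNmJhMTCNtXb4:'  # -> 32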
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:42.670 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:42.931 nvme0n1 00:39:42.931 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:42.931 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:42.931 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:42.931 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:42.931 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:42.931 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:42.931 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:42.931 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:42.931 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:42.931 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:42.931 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:42.931 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:42.931 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:39:42.931 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:42.931 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:42.931 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:42.931 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:42.932 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWMxMzI4YWNkMDg3MDk4YzFkZTAzMTFlNTFiNTg3OGNulVAJ: 00:39:42.932 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: 00:39:42.932 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:42.932 22:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:42.932 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWMxMzI4YWNkMDg3MDk4YzFkZTAzMTFlNTFiNTg3OGNulVAJ: 00:39:42.932 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: ]] 00:39:42.932 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: 00:39:42.932 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:39:42.932 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:42.932 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:42.932 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:42.932 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:42.932 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:42.932 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:39:42.932 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:42.932 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:42.932 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:42.932 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:42.932 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:42.932 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:42.932 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:42.932 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:42.932 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:42.932 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:42.932 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:42.932 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:42.932 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:42.932 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:42.932 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:42.932 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:42.932 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:43.192 nvme0n1 00:39:43.192 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:43.192 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:43.192 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:43.192 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:43.192 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:43.192 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:43.192 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:43.192 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:43.192 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:43.192 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:43.192 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:43.192 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:43.192 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:39:43.192 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:43.192 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:43.192 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:43.192 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:43.192 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E3MGI4NmNjMmI1MGYyMDI1MDJkZmZmMzRjMzA5OWZkOTQxYzM4MTQ4Zjk5NTRmM0RBIQ==: 00:39:43.192 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzE2NWMzNTYyNzM2NzVkODVkZTE3ZjVhMzUyN2Y4NWaSDkhX: 00:39:43.192 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:43.192 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:43.192 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E3MGI4NmNjMmI1MGYyMDI1MDJkZmZmMzRjMzA5OWZkOTQxYzM4MTQ4Zjk5NTRmM0RBIQ==: 00:39:43.192 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzE2NWMzNTYyNzM2NzVkODVkZTE3ZjVhMzUyN2Y4NWaSDkhX: ]] 00:39:43.192 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzE2NWMzNTYyNzM2NzVkODVkZTE3ZjVhMzUyN2Y4NWaSDkhX: 00:39:43.192 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:39:43.192 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:43.192 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:43.192 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:43.192 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:43.192 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:43.192 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:39:43.192 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:43.192 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:43.192 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:43.192 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:43.192 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:43.192 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:43.192 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:43.192 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:43.192 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:43.192 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:43.192 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:43.192 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:43.192 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:43.192 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:43.192 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:43.192 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:43.192 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:43.453 nvme0n1 00:39:43.453 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:43.453 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:43.453 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:43.453 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:43.453 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:43.453 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:43.453 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:43.453 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:43.453 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:43.453 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:43.713 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:43.713 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:43.713 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:39:43.713 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:43.713 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:43.714 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:43.714 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:43.714 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzI4NzA3YjgxMGEwNzAyYWM0YmRhYjA5ZjliYzZkYTdhNGQ5ZDU5MGI2MzU5OGJlYTJmZDVkY2U5MmQ1NjIwZrBhziw=: 00:39:43.714 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:43.714 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:43.714 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:43.714 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzI4NzA3YjgxMGEwNzAyYWM0YmRhYjA5ZjliYzZkYTdhNGQ5ZDU5MGI2MzU5OGJlYTJmZDVkY2U5MmQ1NjIwZrBhziw=: 00:39:43.714 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:43.714 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:39:43.714 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:43.714 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:43.714 22:38:38 
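Each bare 'nvme0n1' token in this log is the namespace of the freshly attached controller surfacing; the @64/@65 block that follows it verifies that the authenticated attach actually produced a controller named nvme0 and then detaches it so the next combination starts from a clean slate. The pair as traced:

    # @64: the attach succeeded iff exactly the expected controller is present
    [[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
    # @65: tear it down before the next digest/dhgroup/keyid combination
    rpc_cmd bdev_nvme_detach_controller nvme0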
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:43.714 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:43.714 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:43.714 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:39:43.714 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:43.714 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:43.714 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:43.714 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:43.714 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:43.714 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:43.714 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:43.714 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:43.714 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:43.714 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:43.714 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:43.714 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:43.714 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:43.714 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:43.714 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:43.714 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:43.714 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:43.984 nvme0n1 00:39:43.984 22:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:43.984 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:43.984 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:43.984 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:43.984 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:43.984 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:43.984 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:43.984 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:43.984 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:43.984 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:43.984 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:43.984 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:43.984 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:43.984 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:39:43.984 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:43.984 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:43.984 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:43.984 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:43.984 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDI1ZDdiOTI5NjM1ZTZjNDExNzYyNTAxNmQzNmJhMTCNtXb4: 00:39:43.984 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjRjNTBmODU4MmU3ZjVlYzdlNzZjYmVlMzk1YjY4YWY1YzdkZWI5NmRiMjAyNmUxYTcxOGIzZmJkNDZkNTg1Y8k+rDI=: 00:39:43.984 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:43.984 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:43.984 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDI1ZDdiOTI5NjM1ZTZjNDExNzYyNTAxNmQzNmJhMTCNtXb4: 00:39:43.984 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjRjNTBmODU4MmU3ZjVlYzdlNzZjYmVlMzk1YjY4YWY1YzdkZWI5NmRiMjAyNmUxYTcxOGIzZmJkNDZkNTg1Y8k+rDI=: ]] 00:39:43.984 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjRjNTBmODU4MmU3ZjVlYzdlNzZjYmVlMzk1YjY4YWY1YzdkZWI5NmRiMjAyNmUxYTcxOGIzZmJkNDZkNTg1Y8k+rDI=: 00:39:43.984 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:39:43.984 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:43.984 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:43.984 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:43.984 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:43.984 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:43.984 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:39:43.984 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:43.984 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:43.984 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:43.984 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:43.984 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:43.984 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:43.984 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:43.984 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:43.984 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:43.984 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:43.984 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:43.984 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:43.984 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:43.984 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:43.984 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:43.984 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:43.984 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:44.554 nvme0n1 00:39:44.554 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:44.554 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:44.554 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:44.554 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:44.554 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:44.554 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:44.554 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:44.554 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:44.554 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:44.554 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:44.554 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:44.554 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:44.554 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:39:44.554 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:44.554 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:44.554 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:44.554 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:44.554 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJjYzgzMDVmYmYwMDJhNzUyMzI2Y2Y0MjE5Nzg2YjgyNjBhZDhhNzI1OGZjMjhiJ3xOtg==: 00:39:44.554 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: 00:39:44.554 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:44.554 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:44.554 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MjJjYzgzMDVmYmYwMDJhNzUyMzI2Y2Y0MjE5Nzg2YjgyNjBhZDhhNzI1OGZjMjhiJ3xOtg==: 00:39:44.554 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: ]] 00:39:44.554 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: 00:39:44.554 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:39:44.554 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:44.554 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:44.554 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:44.554 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:44.554 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:44.554 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:39:44.554 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:44.554 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:44.554 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:44.554 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:44.554 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:44.554 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:44.554 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:44.554 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:44.554 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:44.554 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:44.554 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:44.554 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:44.554 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:44.554 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:44.554 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:44.554 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:44.554 22:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:45.125 nvme0n1 00:39:45.125 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:45.125 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:45.125 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:45.125 22:38:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:45.125 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:45.125 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:45.125 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:45.125 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:45.125 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:45.125 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:45.125 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:45.125 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:45.125 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:39:45.125 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:45.125 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:45.125 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:45.125 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:45.125 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWMxMzI4YWNkMDg3MDk4YzFkZTAzMTFlNTFiNTg3OGNulVAJ: 00:39:45.125 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: 00:39:45.125 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:45.125 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:45.125 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWMxMzI4YWNkMDg3MDk4YzFkZTAzMTFlNTFiNTg3OGNulVAJ: 00:39:45.125 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: ]] 00:39:45.125 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: 00:39:45.125 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:39:45.125 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:45.125 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:45.125 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:45.125 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:45.125 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:45.125 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:39:45.125 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:45.125 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:45.125 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:45.125 22:38:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:45.125 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:45.125 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:45.125 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:45.126 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:45.126 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:45.126 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:45.126 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:45.126 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:45.126 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:45.126 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:45.126 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:45.126 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:45.126 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:45.386 nvme0n1 00:39:45.386 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:45.386 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:45.386 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:45.386 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:45.386 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:45.386 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:45.646 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:45.646 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:45.646 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:45.646 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:45.646 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:45.646 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:45.646 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:39:45.646 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:45.646 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:45.646 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:45.646 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:45.646 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:M2E3MGI4NmNjMmI1MGYyMDI1MDJkZmZmMzRjMzA5OWZkOTQxYzM4MTQ4Zjk5NTRmM0RBIQ==: 00:39:45.646 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzE2NWMzNTYyNzM2NzVkODVkZTE3ZjVhMzUyN2Y4NWaSDkhX: 00:39:45.646 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:45.646 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:45.646 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E3MGI4NmNjMmI1MGYyMDI1MDJkZmZmMzRjMzA5OWZkOTQxYzM4MTQ4Zjk5NTRmM0RBIQ==: 00:39:45.646 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzE2NWMzNTYyNzM2NzVkODVkZTE3ZjVhMzUyN2Y4NWaSDkhX: ]] 00:39:45.646 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzE2NWMzNTYyNzM2NzVkODVkZTE3ZjVhMzUyN2Y4NWaSDkhX: 00:39:45.646 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:39:45.646 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:45.646 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:45.646 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:45.646 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:45.646 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:45.646 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:39:45.646 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:45.646 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:45.646 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:45.646 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:45.646 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:45.646 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:45.646 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:45.646 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:45.646 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:45.646 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:45.646 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:45.646 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:45.646 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:45.646 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:45.646 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:45.646 22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:45.646 
22:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:45.907 nvme0n1 00:39:45.907 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:45.907 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:45.907 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:45.907 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:45.907 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:45.907 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:46.167 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:46.167 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:46.167 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:46.167 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:46.167 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:46.167 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:46.167 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:39:46.167 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:46.167 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:46.167 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:46.167 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:46.167 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzI4NzA3YjgxMGEwNzAyYWM0YmRhYjA5ZjliYzZkYTdhNGQ5ZDU5MGI2MzU5OGJlYTJmZDVkY2U5MmQ1NjIwZrBhziw=: 00:39:46.167 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:46.167 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:46.167 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:46.167 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzI4NzA3YjgxMGEwNzAyYWM0YmRhYjA5ZjliYzZkYTdhNGQ5ZDU5MGI2MzU5OGJlYTJmZDVkY2U5MmQ1NjIwZrBhziw=: 00:39:46.167 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:46.167 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:39:46.167 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:46.167 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:46.167 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:46.167 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:46.167 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:46.167 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:39:46.167 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:39:46.167 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:46.167 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:46.167 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:46.167 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:46.167 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:46.167 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:46.167 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:46.167 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:46.167 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:46.167 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:46.167 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:46.167 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:46.167 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:46.167 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:46.167 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:46.167 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:46.738 nvme0n1 00:39:46.738 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:46.738 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:46.738 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:46.738 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:46.738 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:46.738 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:46.738 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:46.738 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:46.738 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:46.738 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:46.738 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:46.738 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:46.738 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:46.738 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:39:46.738 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:46.738 22:38:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:46.738 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:46.738 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:46.738 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDI1ZDdiOTI5NjM1ZTZjNDExNzYyNTAxNmQzNmJhMTCNtXb4: 00:39:46.738 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjRjNTBmODU4MmU3ZjVlYzdlNzZjYmVlMzk1YjY4YWY1YzdkZWI5NmRiMjAyNmUxYTcxOGIzZmJkNDZkNTg1Y8k+rDI=: 00:39:46.738 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:46.738 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:46.738 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDI1ZDdiOTI5NjM1ZTZjNDExNzYyNTAxNmQzNmJhMTCNtXb4: 00:39:46.738 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjRjNTBmODU4MmU3ZjVlYzdlNzZjYmVlMzk1YjY4YWY1YzdkZWI5NmRiMjAyNmUxYTcxOGIzZmJkNDZkNTg1Y8k+rDI=: ]] 00:39:46.738 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjRjNTBmODU4MmU3ZjVlYzdlNzZjYmVlMzk1YjY4YWY1YzdkZWI5NmRiMjAyNmUxYTcxOGIzZmJkNDZkNTg1Y8k+rDI=: 00:39:46.738 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:39:46.738 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:46.738 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:46.738 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:46.738 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:46.738 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:46.738 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:39:46.738 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:46.738 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:46.738 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:46.738 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:46.738 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:46.738 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:46.738 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:46.738 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:46.738 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:46.738 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:46.738 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:46.738 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:46.738 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:46.738 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:46.738 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:46.738 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:46.738 22:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:47.308 nvme0n1 00:39:47.308 22:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:47.308 22:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:47.308 22:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:47.308 22:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:47.308 22:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:47.308 22:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:47.568 22:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:47.568 22:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:47.568 22:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:47.568 22:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:47.568 22:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:47.568 22:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:47.568 22:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:39:47.568 22:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:47.568 22:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:47.568 22:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:47.568 22:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:47.568 22:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJjYzgzMDVmYmYwMDJhNzUyMzI2Y2Y0MjE5Nzg2YjgyNjBhZDhhNzI1OGZjMjhiJ3xOtg==: 00:39:47.568 22:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: 00:39:47.568 22:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:47.568 22:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:47.568 22:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJjYzgzMDVmYmYwMDJhNzUyMzI2Y2Y0MjE5Nzg2YjgyNjBhZDhhNzI1OGZjMjhiJ3xOtg==: 00:39:47.568 22:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: ]] 00:39:47.568 22:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: 00:39:47.568 22:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:39:47.568 22:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:47.568 22:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:47.568 22:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:47.568 22:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:47.568 22:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:47.568 22:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:39:47.568 22:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:47.568 22:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:47.568 22:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:47.568 22:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:47.568 22:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:47.568 22:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:47.568 22:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:47.568 22:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:47.568 22:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:47.568 22:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:47.568 22:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:47.568 22:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:47.568 22:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:47.568 22:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:47.568 22:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:47.568 22:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:47.568 22:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:48.138 nvme0n1 00:39:48.138 22:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:48.138 22:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:48.138 22:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:48.138 22:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:48.138 22:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:48.138 22:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:48.398 22:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:48.398 22:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:48.398 22:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:39:48.398 22:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:48.398 22:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:48.398 22:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:48.398 22:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:39:48.398 22:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:48.398 22:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:48.398 22:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:48.398 22:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:48.398 22:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWMxMzI4YWNkMDg3MDk4YzFkZTAzMTFlNTFiNTg3OGNulVAJ: 00:39:48.398 22:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: 00:39:48.398 22:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:48.398 22:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:48.398 22:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWMxMzI4YWNkMDg3MDk4YzFkZTAzMTFlNTFiNTg3OGNulVAJ: 00:39:48.398 22:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: ]] 00:39:48.398 22:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: 00:39:48.398 22:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:39:48.398 22:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:48.398 22:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:48.398 22:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:48.398 22:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:48.398 22:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:48.398 22:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:39:48.398 22:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:48.398 22:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:48.398 22:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:48.398 22:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:48.398 22:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:48.398 22:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:48.398 22:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:48.398 22:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:48.398 22:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:48.398 
22:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:48.398 22:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:48.398 22:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:48.398 22:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:48.398 22:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:48.398 22:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:48.398 22:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:48.398 22:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:48.968 nvme0n1 00:39:48.968 22:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:48.968 22:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:48.968 22:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:48.969 22:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:48.969 22:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:48.969 22:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:49.229 22:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:49.229 22:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:49.229 22:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:49.229 22:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:49.229 22:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:49.229 22:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:49.229 22:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:39:49.230 22:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:49.230 22:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:49.230 22:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:49.230 22:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:49.230 22:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E3MGI4NmNjMmI1MGYyMDI1MDJkZmZmMzRjMzA5OWZkOTQxYzM4MTQ4Zjk5NTRmM0RBIQ==: 00:39:49.230 22:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzE2NWMzNTYyNzM2NzVkODVkZTE3ZjVhMzUyN2Y4NWaSDkhX: 00:39:49.230 22:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:49.230 22:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:49.230 22:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E3MGI4NmNjMmI1MGYyMDI1MDJkZmZmMzRjMzA5OWZkOTQxYzM4MTQ4Zjk5NTRmM0RBIQ==: 00:39:49.230 22:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NzE2NWMzNTYyNzM2NzVkODVkZTE3ZjVhMzUyN2Y4NWaSDkhX: ]] 00:39:49.230 22:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzE2NWMzNTYyNzM2NzVkODVkZTE3ZjVhMzUyN2Y4NWaSDkhX: 00:39:49.230 22:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:39:49.230 22:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:49.230 22:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:49.230 22:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:49.230 22:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:49.230 22:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:49.230 22:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:39:49.230 22:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:49.230 22:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:49.230 22:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:49.230 22:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:49.230 22:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:49.230 22:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:49.230 22:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:49.230 22:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:49.230 22:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:49.230 22:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:49.230 22:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:49.230 22:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:49.230 22:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:49.230 22:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:49.230 22:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:49.230 22:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:49.230 22:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:49.800 nvme0n1 00:39:49.800 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:49.800 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:49.800 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:49.800 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:49.800 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:49.800 22:38:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:50.060 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:50.060 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:50.060 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:50.060 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:50.060 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:50.060 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:50.060 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:39:50.060 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:50.060 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:50.060 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:50.060 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:50.060 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzI4NzA3YjgxMGEwNzAyYWM0YmRhYjA5ZjliYzZkYTdhNGQ5ZDU5MGI2MzU5OGJlYTJmZDVkY2U5MmQ1NjIwZrBhziw=: 00:39:50.060 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:50.060 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:50.060 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:50.060 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzI4NzA3YjgxMGEwNzAyYWM0YmRhYjA5ZjliYzZkYTdhNGQ5ZDU5MGI2MzU5OGJlYTJmZDVkY2U5MmQ1NjIwZrBhziw=: 00:39:50.060 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:50.060 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:39:50.060 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:50.060 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:50.060 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:50.060 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:50.060 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:50.060 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:39:50.060 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:50.060 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:50.060 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:50.060 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:50.060 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:50.060 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:50.060 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:50.060 22:38:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:50.060 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:50.060 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:50.060 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:50.060 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:50.060 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:50.060 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:50.060 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:50.060 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:50.060 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:50.631 nvme0n1 00:39:50.631 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:50.631 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:50.631 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:50.631 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:50.631 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:50.631 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:50.631 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:50.631 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:50.631 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:50.631 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:50.891 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:50.891 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:39:50.891 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:50.891 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:50.891 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:39:50.891 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:50.891 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:50.891 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:50.891 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:50.891 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDI1ZDdiOTI5NjM1ZTZjNDExNzYyNTAxNmQzNmJhMTCNtXb4: 00:39:50.891 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MjRjNTBmODU4MmU3ZjVlYzdlNzZjYmVlMzk1YjY4YWY1YzdkZWI5NmRiMjAyNmUxYTcxOGIzZmJkNDZkNTg1Y8k+rDI=: 00:39:50.891 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:50.891 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:50.891 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDI1ZDdiOTI5NjM1ZTZjNDExNzYyNTAxNmQzNmJhMTCNtXb4: 00:39:50.891 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjRjNTBmODU4MmU3ZjVlYzdlNzZjYmVlMzk1YjY4YWY1YzdkZWI5NmRiMjAyNmUxYTcxOGIzZmJkNDZkNTg1Y8k+rDI=: ]] 00:39:50.891 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjRjNTBmODU4MmU3ZjVlYzdlNzZjYmVlMzk1YjY4YWY1YzdkZWI5NmRiMjAyNmUxYTcxOGIzZmJkNDZkNTg1Y8k+rDI=: 00:39:50.891 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:39:50.891 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:50.891 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:50.891 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:50.891 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:50.892 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:50.892 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:39:50.892 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:50.892 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:50.892 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:50.892 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:50.892 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:50.892 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:50.892 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:50.892 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:50.892 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:50.892 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:50.892 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:50.892 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:50.892 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:50.892 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:50.892 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:50.892 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:50.892 22:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:39:50.892 nvme0n1 00:39:50.892 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:50.892 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:50.892 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:50.892 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:50.892 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:50.892 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:50.892 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:50.892 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:50.892 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:50.892 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:50.892 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:50.892 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:50.892 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:39:50.892 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:50.892 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:50.892 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:50.892 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:50.892 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJjYzgzMDVmYmYwMDJhNzUyMzI2Y2Y0MjE5Nzg2YjgyNjBhZDhhNzI1OGZjMjhiJ3xOtg==: 00:39:50.892 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: 00:39:50.892 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:50.892 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:50.892 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJjYzgzMDVmYmYwMDJhNzUyMzI2Y2Y0MjE5Nzg2YjgyNjBhZDhhNzI1OGZjMjhiJ3xOtg==: 00:39:50.892 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: ]] 00:39:50.892 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: 00:39:50.892 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:39:50.892 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:50.892 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:50.892 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:50.892 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:50.892 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:39:50.892 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:39:50.892 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:50.892 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:50.892 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:50.892 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:50.892 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:50.892 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:50.892 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:50.892 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:50.892 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:50.892 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:50.892 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:50.892 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:50.892 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:50.892 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:50.892 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:50.892 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:50.892 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:51.152 nvme0n1 00:39:51.152 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:51.152 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:51.152 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:51.152 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:51.152 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:51.152 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:51.152 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:51.152 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:51.152 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:51.152 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:51.152 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:51.152 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:51.152 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:39:51.152 
22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:51.152 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:51.152 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:51.152 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:51.152 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWMxMzI4YWNkMDg3MDk4YzFkZTAzMTFlNTFiNTg3OGNulVAJ: 00:39:51.152 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: 00:39:51.152 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:51.152 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:51.152 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWMxMzI4YWNkMDg3MDk4YzFkZTAzMTFlNTFiNTg3OGNulVAJ: 00:39:51.152 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: ]] 00:39:51.152 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: 00:39:51.152 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:39:51.152 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:51.152 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:51.152 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:51.152 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:51.152 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:51.152 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:39:51.152 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:51.152 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:51.152 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:51.152 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:51.152 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:51.152 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:51.152 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:51.152 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:51.152 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:51.152 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:51.152 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:51.152 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:51.152 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:51.152 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:51.152 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:51.152 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:51.152 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:51.413 nvme0n1 00:39:51.413 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:51.413 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:51.413 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:51.413 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:51.413 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:51.413 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:51.413 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:51.413 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:51.413 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:51.413 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:51.413 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:51.413 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:51.413 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:39:51.413 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:51.413 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:51.413 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:51.413 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:51.413 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E3MGI4NmNjMmI1MGYyMDI1MDJkZmZmMzRjMzA5OWZkOTQxYzM4MTQ4Zjk5NTRmM0RBIQ==: 00:39:51.413 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzE2NWMzNTYyNzM2NzVkODVkZTE3ZjVhMzUyN2Y4NWaSDkhX: 00:39:51.413 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:51.413 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:51.413 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E3MGI4NmNjMmI1MGYyMDI1MDJkZmZmMzRjMzA5OWZkOTQxYzM4MTQ4Zjk5NTRmM0RBIQ==: 00:39:51.413 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzE2NWMzNTYyNzM2NzVkODVkZTE3ZjVhMzUyN2Y4NWaSDkhX: ]] 00:39:51.413 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzE2NWMzNTYyNzM2NzVkODVkZTE3ZjVhMzUyN2Y4NWaSDkhX: 00:39:51.413 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:39:51.413 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:51.413 
22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:51.413 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:51.413 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:51.413 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:51.413 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:39:51.413 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:51.413 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:51.413 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:51.413 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:51.413 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:51.413 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:51.413 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:51.413 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:51.413 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:51.413 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:51.413 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:51.413 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:51.413 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:51.413 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:51.413 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:51.413 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:51.413 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:51.673 nvme0n1 00:39:51.673 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:51.673 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:51.673 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:51.673 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:51.673 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:51.673 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:51.673 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:51.673 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:51.673 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:51.673 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:39:51.673 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:51.673 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:51.673 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:39:51.673 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:51.673 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:51.673 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:51.673 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:51.673 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzI4NzA3YjgxMGEwNzAyYWM0YmRhYjA5ZjliYzZkYTdhNGQ5ZDU5MGI2MzU5OGJlYTJmZDVkY2U5MmQ1NjIwZrBhziw=: 00:39:51.673 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:51.673 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:51.673 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:51.673 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzI4NzA3YjgxMGEwNzAyYWM0YmRhYjA5ZjliYzZkYTdhNGQ5ZDU5MGI2MzU5OGJlYTJmZDVkY2U5MmQ1NjIwZrBhziw=: 00:39:51.673 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:51.673 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:39:51.673 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:51.673 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:51.673 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:51.673 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:51.673 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:51.673 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:39:51.673 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:51.673 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:51.673 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:51.673 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:51.673 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:51.673 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:51.673 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:51.673 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:51.673 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:51.673 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:51.673 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:51.673 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:51.673 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:51.673 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:51.674 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:51.674 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:51.674 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:51.934 nvme0n1 00:39:51.934 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:51.934 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:51.934 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:51.934 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:51.934 22:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:51.934 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:51.934 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:51.934 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:51.934 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:51.934 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:51.934 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:51.934 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:51.934 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:51.934 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:39:51.934 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:51.934 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:51.934 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:51.934 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:51.934 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDI1ZDdiOTI5NjM1ZTZjNDExNzYyNTAxNmQzNmJhMTCNtXb4: 00:39:51.934 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjRjNTBmODU4MmU3ZjVlYzdlNzZjYmVlMzk1YjY4YWY1YzdkZWI5NmRiMjAyNmUxYTcxOGIzZmJkNDZkNTg1Y8k+rDI=: 00:39:51.934 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:51.934 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:51.934 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDI1ZDdiOTI5NjM1ZTZjNDExNzYyNTAxNmQzNmJhMTCNtXb4: 00:39:51.934 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjRjNTBmODU4MmU3ZjVlYzdlNzZjYmVlMzk1YjY4YWY1YzdkZWI5NmRiMjAyNmUxYTcxOGIzZmJkNDZkNTg1Y8k+rDI=: ]] 00:39:51.934 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MjRjNTBmODU4MmU3ZjVlYzdlNzZjYmVlMzk1YjY4YWY1YzdkZWI5NmRiMjAyNmUxYTcxOGIzZmJkNDZkNTg1Y8k+rDI=: 00:39:51.934 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:39:51.934 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:51.934 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:51.934 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:51.934 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:51.934 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:51.934 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:39:51.934 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:51.934 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:51.934 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:51.934 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:51.934 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:51.934 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:51.934 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:51.934 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:51.934 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:51.934 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:51.934 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:51.934 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:51.934 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:51.934 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:51.934 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:51.934 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:51.934 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:52.195 nvme0n1 00:39:52.195 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:52.195 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:52.195 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:52.195 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:52.195 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:52.195 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:52.195 
22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:52.195 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:52.195 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:52.195 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:52.195 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:52.195 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:52.195 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:39:52.195 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:52.195 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:52.195 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:52.195 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:52.195 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJjYzgzMDVmYmYwMDJhNzUyMzI2Y2Y0MjE5Nzg2YjgyNjBhZDhhNzI1OGZjMjhiJ3xOtg==: 00:39:52.195 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: 00:39:52.195 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:52.195 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:52.195 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJjYzgzMDVmYmYwMDJhNzUyMzI2Y2Y0MjE5Nzg2YjgyNjBhZDhhNzI1OGZjMjhiJ3xOtg==: 00:39:52.195 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: ]] 00:39:52.195 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: 00:39:52.195 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:39:52.195 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:52.195 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:52.195 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:52.195 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:52.195 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:52.195 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:39:52.195 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:52.195 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:52.195 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:52.195 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:52.195 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:52.195 22:38:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:52.195 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:52.195 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:52.195 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:52.195 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:52.195 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:52.195 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:52.195 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:52.195 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:52.195 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:52.195 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:52.195 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:52.455 nvme0n1 00:39:52.455 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:52.455 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:52.455 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:52.455 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:52.455 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:52.455 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:52.455 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:52.455 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:52.455 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:52.455 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:52.455 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:52.455 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:52.455 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:39:52.455 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:52.455 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:52.455 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:52.455 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:52.455 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWMxMzI4YWNkMDg3MDk4YzFkZTAzMTFlNTFiNTg3OGNulVAJ: 00:39:52.455 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: 00:39:52.455 22:38:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:52.455 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:52.455 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWMxMzI4YWNkMDg3MDk4YzFkZTAzMTFlNTFiNTg3OGNulVAJ: 00:39:52.455 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: ]] 00:39:52.455 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: 00:39:52.455 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:39:52.455 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:52.455 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:52.455 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:52.455 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:52.455 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:52.455 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:39:52.455 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:52.455 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:52.455 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:52.455 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:52.455 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:52.455 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:52.455 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:52.455 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:52.455 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:52.455 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:52.455 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:52.455 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:52.455 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:52.455 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:52.455 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:52.455 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:52.455 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:52.716 nvme0n1 00:39:52.716 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:52.716 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:52.716 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:52.716 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:52.716 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:52.716 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:52.716 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:52.716 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:52.716 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:52.716 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:52.716 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:52.716 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:52.716 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:39:52.716 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:52.716 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:52.716 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:52.716 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:52.716 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E3MGI4NmNjMmI1MGYyMDI1MDJkZmZmMzRjMzA5OWZkOTQxYzM4MTQ4Zjk5NTRmM0RBIQ==: 00:39:52.716 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzE2NWMzNTYyNzM2NzVkODVkZTE3ZjVhMzUyN2Y4NWaSDkhX: 00:39:52.716 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:52.716 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:52.716 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E3MGI4NmNjMmI1MGYyMDI1MDJkZmZmMzRjMzA5OWZkOTQxYzM4MTQ4Zjk5NTRmM0RBIQ==: 00:39:52.716 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzE2NWMzNTYyNzM2NzVkODVkZTE3ZjVhMzUyN2Y4NWaSDkhX: ]] 00:39:52.716 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzE2NWMzNTYyNzM2NzVkODVkZTE3ZjVhMzUyN2Y4NWaSDkhX: 00:39:52.716 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:39:52.716 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:52.716 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:52.716 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:52.716 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:52.716 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:52.716 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:39:52.716 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:52.716 22:38:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:52.716 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:52.716 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:52.716 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:52.716 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:52.716 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:52.716 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:52.716 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:52.716 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:52.716 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:52.716 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:52.716 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:52.716 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:52.716 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:52.716 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:52.716 22:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:52.976 nvme0n1 00:39:52.976 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:52.976 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:52.976 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:52.976 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:52.976 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:52.976 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:52.976 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:52.976 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:52.976 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:52.976 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:52.976 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:52.976 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:52.976 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:39:52.976 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:52.976 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:52.976 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:52.976 
22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:52.976 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzI4NzA3YjgxMGEwNzAyYWM0YmRhYjA5ZjliYzZkYTdhNGQ5ZDU5MGI2MzU5OGJlYTJmZDVkY2U5MmQ1NjIwZrBhziw=: 00:39:52.976 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:52.976 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:52.976 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:52.976 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzI4NzA3YjgxMGEwNzAyYWM0YmRhYjA5ZjliYzZkYTdhNGQ5ZDU5MGI2MzU5OGJlYTJmZDVkY2U5MmQ1NjIwZrBhziw=: 00:39:52.976 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:52.976 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:39:52.976 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:52.976 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:52.976 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:52.976 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:52.976 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:52.976 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:39:52.976 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:52.976 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:52.976 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:52.976 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:52.976 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:52.976 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:52.976 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:52.976 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:52.976 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:52.976 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:52.976 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:52.976 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:52.976 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:52.976 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:52.976 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:52.976 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:52.976 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
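The stretch of trace above is one cell of the auth matrix that host/auth.sh walks: for each digest, DH group, and key index it programs the kernel nvmet target with the key pair (nvmet_auth_set_key), restricts the SPDK host to the single digest and dhgroup under test (bdev_nvme_set_options), attaches with the matching --dhchap-key (adding --dhchap-ctrlr-key only when a controller key exists; keyid 4 has none), confirms via bdev_nvme_get_controllers that the controller actually came up as nvme0, and detaches before the next combination. The 10.0.0.1 address comes from get_main_ns_ip, which picks NVMF_INITIATOR_IP out of the ip_candidates table for the tcp transport, as the repeated nvmf/common.sh lines show. Below is a condensed sketch of that loop, reconstructed from the commands visible in this trace; it assumes the digests, dhgroups, keys, and ckeys arrays defined earlier in auth.sh, key names key0..key4/ckey0..ckey3 already registered on the host side, and rpc_cmd as the script's wrapper around SPDK's rpc.py.

for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      # Target side: install the key (and ckey, if one exists) for this combination.
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
      # Host side: allow only the digest/dhgroup under test to be negotiated.
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      # Connect with DH-HMAC-CHAP; the controller key is optional (keyid 4 omits it).
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
      # Authentication passed iff the controller materialized under its expected name.
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
    done
  done
done

The DHHC-1:XX:...: strings echoed throughout are the NVMe-oF representation of a DH-HMAC-CHAP secret; the middle field records how the secret was transformed when it was generated (00 plain, 01 SHA-256, 02 SHA-384, 03 SHA-512). The secrets are generated fresh for each run, so the values in this log are not reusable fixtures.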
00:39:53.235 nvme0n1 00:39:53.235 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:53.235 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:53.235 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:53.235 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:53.235 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:53.235 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:53.235 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:53.235 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:53.235 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:53.235 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:53.235 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:53.235 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:53.235 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:53.235 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:39:53.235 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:53.235 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:53.235 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:53.235 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:53.235 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDI1ZDdiOTI5NjM1ZTZjNDExNzYyNTAxNmQzNmJhMTCNtXb4: 00:39:53.235 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjRjNTBmODU4MmU3ZjVlYzdlNzZjYmVlMzk1YjY4YWY1YzdkZWI5NmRiMjAyNmUxYTcxOGIzZmJkNDZkNTg1Y8k+rDI=: 00:39:53.235 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:53.235 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:53.235 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDI1ZDdiOTI5NjM1ZTZjNDExNzYyNTAxNmQzNmJhMTCNtXb4: 00:39:53.235 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjRjNTBmODU4MmU3ZjVlYzdlNzZjYmVlMzk1YjY4YWY1YzdkZWI5NmRiMjAyNmUxYTcxOGIzZmJkNDZkNTg1Y8k+rDI=: ]] 00:39:53.235 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjRjNTBmODU4MmU3ZjVlYzdlNzZjYmVlMzk1YjY4YWY1YzdkZWI5NmRiMjAyNmUxYTcxOGIzZmJkNDZkNTg1Y8k+rDI=: 00:39:53.235 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:39:53.235 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:53.235 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:53.235 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:53.235 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:53.235 22:38:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:53.235 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:39:53.235 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:53.235 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:53.235 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:53.235 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:53.235 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:53.235 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:53.235 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:53.235 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:53.235 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:53.235 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:53.235 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:53.235 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:53.235 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:53.235 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:53.235 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:53.235 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:53.235 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:53.496 nvme0n1 00:39:53.496 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:53.496 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:53.496 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:53.496 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:53.496 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:53.496 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:53.496 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:53.496 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:53.496 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:53.496 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:53.496 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:53.496 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:53.496 22:38:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:39:53.496 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:53.496 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:53.496 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:53.496 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:53.496 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJjYzgzMDVmYmYwMDJhNzUyMzI2Y2Y0MjE5Nzg2YjgyNjBhZDhhNzI1OGZjMjhiJ3xOtg==: 00:39:53.496 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: 00:39:53.496 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:53.496 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:53.496 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJjYzgzMDVmYmYwMDJhNzUyMzI2Y2Y0MjE5Nzg2YjgyNjBhZDhhNzI1OGZjMjhiJ3xOtg==: 00:39:53.496 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: ]] 00:39:53.496 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: 00:39:53.496 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:39:53.496 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:53.496 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:53.496 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:53.496 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:53.496 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:53.496 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:39:53.496 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:53.496 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:53.496 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:53.756 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:53.756 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:53.756 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:53.756 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:53.756 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:53.756 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:53.756 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:53.756 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:53.756 22:38:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:53.756 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:53.756 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:53.756 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:53.756 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:53.756 22:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:54.015 nvme0n1 00:39:54.015 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:54.015 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:54.015 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:54.015 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:54.015 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:54.015 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:54.015 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:54.015 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:54.015 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:54.015 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:54.015 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:54.015 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:54.015 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:39:54.015 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:54.015 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:54.015 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:54.015 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:54.015 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWMxMzI4YWNkMDg3MDk4YzFkZTAzMTFlNTFiNTg3OGNulVAJ: 00:39:54.015 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: 00:39:54.015 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:54.015 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:54.015 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWMxMzI4YWNkMDg3MDk4YzFkZTAzMTFlNTFiNTg3OGNulVAJ: 00:39:54.015 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: ]] 00:39:54.015 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: 00:39:54.015 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:39:54.015 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:54.015 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:54.015 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:54.015 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:54.015 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:54.015 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:39:54.015 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:54.015 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:54.015 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:54.015 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:54.015 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:54.015 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:54.015 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:54.015 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:54.015 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:54.015 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:54.015 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:54.015 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:54.015 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:54.015 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:54.015 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:54.015 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:54.015 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:54.274 nvme0n1 00:39:54.274 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:54.274 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:54.274 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:54.274 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:54.274 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:54.274 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:54.274 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:54.274 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:39:54.274 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:54.274 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:54.274 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:54.274 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:54.274 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:39:54.274 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:54.274 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:54.275 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:54.275 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:54.275 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E3MGI4NmNjMmI1MGYyMDI1MDJkZmZmMzRjMzA5OWZkOTQxYzM4MTQ4Zjk5NTRmM0RBIQ==: 00:39:54.275 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzE2NWMzNTYyNzM2NzVkODVkZTE3ZjVhMzUyN2Y4NWaSDkhX: 00:39:54.275 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:54.275 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:54.275 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E3MGI4NmNjMmI1MGYyMDI1MDJkZmZmMzRjMzA5OWZkOTQxYzM4MTQ4Zjk5NTRmM0RBIQ==: 00:39:54.275 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzE2NWMzNTYyNzM2NzVkODVkZTE3ZjVhMzUyN2Y4NWaSDkhX: ]] 00:39:54.275 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzE2NWMzNTYyNzM2NzVkODVkZTE3ZjVhMzUyN2Y4NWaSDkhX: 00:39:54.275 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:39:54.275 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:54.275 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:54.275 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:54.275 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:54.275 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:54.275 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:39:54.275 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:54.275 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:54.275 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:54.275 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:54.275 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:54.275 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:54.275 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:54.275 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:54.275 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:54.275 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:54.275 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:54.275 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:54.275 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:54.275 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:54.275 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:54.275 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:54.275 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:54.615 nvme0n1 00:39:54.615 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:54.615 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:54.615 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:54.615 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:54.615 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:54.615 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:54.615 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:54.615 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:54.615 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:54.615 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:54.615 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:54.615 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:54.615 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:39:54.615 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:54.615 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:54.615 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:54.615 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:54.615 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzI4NzA3YjgxMGEwNzAyYWM0YmRhYjA5ZjliYzZkYTdhNGQ5ZDU5MGI2MzU5OGJlYTJmZDVkY2U5MmQ1NjIwZrBhziw=: 00:39:54.615 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:54.615 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:54.615 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:54.615 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MzI4NzA3YjgxMGEwNzAyYWM0YmRhYjA5ZjliYzZkYTdhNGQ5ZDU5MGI2MzU5OGJlYTJmZDVkY2U5MmQ1NjIwZrBhziw=: 00:39:54.615 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:54.615 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:39:54.615 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:54.615 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:54.615 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:54.615 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:54.615 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:54.615 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:39:54.615 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:54.615 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:54.615 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:54.615 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:54.615 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:54.615 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:54.615 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:54.615 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:54.615 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:54.615 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:54.615 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:54.615 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:54.615 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:54.615 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:54.615 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:54.615 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:54.615 22:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:54.967 nvme0n1 00:39:54.967 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:54.967 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:54.967 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:54.967 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:54.967 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:54.967 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:54.967 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:54.967 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:54.967 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:54.967 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:54.967 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:54.967 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:54.967 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:54.967 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:39:54.967 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:54.967 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:54.967 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:54.967 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:54.967 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDI1ZDdiOTI5NjM1ZTZjNDExNzYyNTAxNmQzNmJhMTCNtXb4: 00:39:54.967 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjRjNTBmODU4MmU3ZjVlYzdlNzZjYmVlMzk1YjY4YWY1YzdkZWI5NmRiMjAyNmUxYTcxOGIzZmJkNDZkNTg1Y8k+rDI=: 00:39:54.967 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:54.967 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:54.967 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDI1ZDdiOTI5NjM1ZTZjNDExNzYyNTAxNmQzNmJhMTCNtXb4: 00:39:54.967 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjRjNTBmODU4MmU3ZjVlYzdlNzZjYmVlMzk1YjY4YWY1YzdkZWI5NmRiMjAyNmUxYTcxOGIzZmJkNDZkNTg1Y8k+rDI=: ]] 00:39:54.967 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjRjNTBmODU4MmU3ZjVlYzdlNzZjYmVlMzk1YjY4YWY1YzdkZWI5NmRiMjAyNmUxYTcxOGIzZmJkNDZkNTg1Y8k+rDI=: 00:39:54.967 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:39:54.967 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:54.967 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:54.967 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:54.967 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:54.967 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:54.967 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:39:54.967 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:54.967 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:54.967 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:54.967 22:38:50 
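The get_main_ns_ip trace that follows (nvmf/common.sh@767-781) picks the address the initiator should dial by mapping the transport to the name of an environment variable and then dereferencing it. A reconstruction from the trace; the TEST_TRANSPORT variable name and the early returns are assumptions, since xtrace only shows the happy path of this tcp run:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP    # rdma runs dial the target
        ip_candidates["tcp"]=NVMF_INITIATOR_IP        # tcp runs dial the initiator
        [[ -z $TEST_TRANSPORT ]] && return 1                      # trace: [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1    # [[ -z NVMF_INITIATOR_IP ]]
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1                   # indirect lookup: [[ -z 10.0.0.1 ]]
        echo "${!ip}"                                 # -> 10.0.0.1 in this run
    }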
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:54.967 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:54.967 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:54.967 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:54.967 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:54.967 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:54.967 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:54.967 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:54.967 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:54.967 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:54.967 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:54.967 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:54.967 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:54.967 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:55.545 nvme0n1 00:39:55.545 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:55.545 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:55.545 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:55.545 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:55.545 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:55.545 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:55.545 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:55.545 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:55.545 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:55.545 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:55.545 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:55.545 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:55.545 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:39:55.545 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:55.545 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:55.545 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:55.545 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:55.545 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MjJjYzgzMDVmYmYwMDJhNzUyMzI2Y2Y0MjE5Nzg2YjgyNjBhZDhhNzI1OGZjMjhiJ3xOtg==: 00:39:55.545 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: 00:39:55.545 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:55.545 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:55.545 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJjYzgzMDVmYmYwMDJhNzUyMzI2Y2Y0MjE5Nzg2YjgyNjBhZDhhNzI1OGZjMjhiJ3xOtg==: 00:39:55.545 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: ]] 00:39:55.545 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: 00:39:55.545 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:39:55.545 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:55.545 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:55.545 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:55.545 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:55.545 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:55.545 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:39:55.545 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:55.545 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:55.545 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:55.545 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:55.545 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:55.545 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:55.545 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:55.545 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:55.545 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:55.545 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:55.545 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:55.545 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:55.545 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:55.545 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:55.545 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:55.545 22:38:50 
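On the target side, each nvmet_auth_set_key call traced here (host/auth.sh@42-51) installs the per-keyid secrets before the host attempts to connect. xtrace does not show where the echoes are redirected; the configfs paths and attribute names below are an assumption based on how the Linux nvmet soft target exposes DH-HMAC-CHAP settings, so treat this as a sketch rather than the script's literal body:

    nvmet_auth_set_key() {
        local digest dhgroup keyid key ckey
        digest=$1 dhgroup=$2 keyid=$3
        key=${keys[keyid]} ckey=${ckeys[keyid]}          # DHHC-1:... secrets set up earlier
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path
        echo "hmac(${digest})" > "$host/dhchap_hash"     # auth.sh@48: hmac(sha512)
        echo "$dhgroup"        > "$host/dhchap_dhgroup"  # auth.sh@49
        echo "$key"            > "$host/dhchap_key"      # auth.sh@50: host secret
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"  # auth.sh@51, optional
    }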
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:55.545 22:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:56.115 nvme0n1 00:39:56.115 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:56.115 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:56.115 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:56.115 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:56.115 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:56.115 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:56.115 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:56.115 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:56.115 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:56.115 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:56.115 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:56.115 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:56.115 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:39:56.115 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:56.115 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:56.115 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:56.115 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:56.115 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWMxMzI4YWNkMDg3MDk4YzFkZTAzMTFlNTFiNTg3OGNulVAJ: 00:39:56.115 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: 00:39:56.115 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:56.115 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:56.115 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWMxMzI4YWNkMDg3MDk4YzFkZTAzMTFlNTFiNTg3OGNulVAJ: 00:39:56.115 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: ]] 00:39:56.115 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: 00:39:56.115 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:39:56.115 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:56.115 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:56.115 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:56.115 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:56.115 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:56.115 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:39:56.115 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:56.115 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:56.115 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:56.115 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:56.115 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:56.115 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:56.115 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:56.115 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:56.115 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:56.115 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:56.115 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:56.115 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:56.115 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:56.115 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:56.115 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:56.115 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:56.115 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:56.685 nvme0n1 00:39:56.685 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:56.685 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:56.685 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:56.685 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:56.685 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:56.685 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:56.685 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:56.685 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:56.685 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:56.685 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:56.685 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:56.685 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:56.685 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:39:56.685 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:56.685 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:56.685 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:56.685 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:56.685 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E3MGI4NmNjMmI1MGYyMDI1MDJkZmZmMzRjMzA5OWZkOTQxYzM4MTQ4Zjk5NTRmM0RBIQ==: 00:39:56.685 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzE2NWMzNTYyNzM2NzVkODVkZTE3ZjVhMzUyN2Y4NWaSDkhX: 00:39:56.685 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:56.685 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:56.685 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E3MGI4NmNjMmI1MGYyMDI1MDJkZmZmMzRjMzA5OWZkOTQxYzM4MTQ4Zjk5NTRmM0RBIQ==: 00:39:56.685 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzE2NWMzNTYyNzM2NzVkODVkZTE3ZjVhMzUyN2Y4NWaSDkhX: ]] 00:39:56.685 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzE2NWMzNTYyNzM2NzVkODVkZTE3ZjVhMzUyN2Y4NWaSDkhX: 00:39:56.685 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:39:56.685 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:56.685 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:56.685 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:56.685 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:56.685 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:56.685 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:39:56.685 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:56.685 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:56.685 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:56.685 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:56.685 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:56.685 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:56.685 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:56.685 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:56.685 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:56.685 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:56.685 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:56.685 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:56.685 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:56.685 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:56.685 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:56.685 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:56.685 22:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:57.256 nvme0n1 00:39:57.256 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:57.256 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:57.256 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:57.256 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:57.256 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:57.256 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:57.256 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:57.256 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:57.256 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:57.256 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:57.256 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:57.256 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:57.256 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:39:57.256 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:57.256 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:57.256 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:57.256 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:57.256 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzI4NzA3YjgxMGEwNzAyYWM0YmRhYjA5ZjliYzZkYTdhNGQ5ZDU5MGI2MzU5OGJlYTJmZDVkY2U5MmQ1NjIwZrBhziw=: 00:39:57.256 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:57.256 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:57.256 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:57.256 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzI4NzA3YjgxMGEwNzAyYWM0YmRhYjA5ZjliYzZkYTdhNGQ5ZDU5MGI2MzU5OGJlYTJmZDVkY2U5MmQ1NjIwZrBhziw=: 00:39:57.256 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:57.256 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:39:57.256 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:57.256 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:57.256 22:38:52 
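Stepping back, the @101-@104 markers show the shape of the loop this whole stretch of the log is executing: for each DH group (ffdhe4096, then ffdhe6144, then ffdhe8192) and each key index, the script provisions the target and then authenticates from the host. A condensed reconstruction; only sha512 appears in this excerpt, and the full script presumably iterates digests the same way:

    for dhgroup in "${dhgroups[@]}"; do                      # auth.sh@101
        for keyid in "${!keys[@]}"; do                       # auth.sh@102: 0..4
            nvmet_auth_set_key sha512 "$dhgroup" "$keyid"    # auth.sh@103, target side
            connect_authenticate sha512 "$dhgroup" "$keyid"  # auth.sh@104, host side
        done
    done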
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:57.256 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:57.256 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:57.256 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:39:57.256 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:57.256 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:57.256 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:57.256 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:57.256 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:57.256 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:57.256 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:57.256 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:57.256 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:57.256 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:57.256 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:57.256 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:57.256 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:57.256 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:57.256 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:57.256 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:57.256 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:57.827 nvme0n1 00:39:57.827 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:57.827 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:57.827 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:57.827 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:57.827 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:57.827 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:57.827 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:57.827 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:57.827 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:57.827 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:57.827 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:57.827 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:57.827 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:57.827 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:39:57.827 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:57.827 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:57.827 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:57.827 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:57.827 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDI1ZDdiOTI5NjM1ZTZjNDExNzYyNTAxNmQzNmJhMTCNtXb4: 00:39:57.827 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjRjNTBmODU4MmU3ZjVlYzdlNzZjYmVlMzk1YjY4YWY1YzdkZWI5NmRiMjAyNmUxYTcxOGIzZmJkNDZkNTg1Y8k+rDI=: 00:39:57.827 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:57.827 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:57.827 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDI1ZDdiOTI5NjM1ZTZjNDExNzYyNTAxNmQzNmJhMTCNtXb4: 00:39:57.827 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjRjNTBmODU4MmU3ZjVlYzdlNzZjYmVlMzk1YjY4YWY1YzdkZWI5NmRiMjAyNmUxYTcxOGIzZmJkNDZkNTg1Y8k+rDI=: ]] 00:39:57.827 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjRjNTBmODU4MmU3ZjVlYzdlNzZjYmVlMzk1YjY4YWY1YzdkZWI5NmRiMjAyNmUxYTcxOGIzZmJkNDZkNTg1Y8k+rDI=: 00:39:57.827 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:39:57.827 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:57.827 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:57.827 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:57.827 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:57.827 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:57.827 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:39:57.827 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:57.827 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:57.827 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:57.827 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:57.827 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:57.827 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:57.827 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:57.828 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:57.828 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:57.828 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:57.828 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:57.828 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:57.828 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:57.828 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:57.828 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:57.828 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:57.828 22:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:58.399 nvme0n1 00:39:58.399 22:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:58.399 22:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:58.399 22:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:58.399 22:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:58.399 22:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:58.659 22:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:58.659 22:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:58.659 22:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:58.659 22:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:58.659 22:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:58.659 22:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:58.659 22:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:58.659 22:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:39:58.659 22:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:58.659 22:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:58.659 22:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:58.659 22:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:58.659 22:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJjYzgzMDVmYmYwMDJhNzUyMzI2Y2Y0MjE5Nzg2YjgyNjBhZDhhNzI1OGZjMjhiJ3xOtg==: 00:39:58.659 22:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: 00:39:58.659 22:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:58.659 22:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:58.659 22:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MjJjYzgzMDVmYmYwMDJhNzUyMzI2Y2Y0MjE5Nzg2YjgyNjBhZDhhNzI1OGZjMjhiJ3xOtg==: 00:39:58.659 22:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: ]] 00:39:58.659 22:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: 00:39:58.659 22:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:39:58.659 22:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:58.659 22:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:58.659 22:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:58.659 22:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:58.659 22:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:58.659 22:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:39:58.660 22:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:58.660 22:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:58.660 22:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:58.660 22:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:58.660 22:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:58.660 22:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:58.660 22:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:58.660 22:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:58.660 22:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:58.660 22:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:58.660 22:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:58.660 22:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:58.660 22:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:58.660 22:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:58.660 22:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:58.660 22:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:58.660 22:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:59.230 nvme0n1 00:39:59.230 22:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:59.230 22:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:59.230 22:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:59.230 22:38:54 
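The host half of each iteration, connect_authenticate (host/auth.sh@55-65), reduces to four RPCs, all visible verbatim in the trace. A condensed replay of one ffdhe8192 round trip; rpc_cmd is the test suite's wrapper around SPDK's scripts/rpc.py:

    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # the controller only appears if DH-HMAC-CHAP succeeded ("nvme0n1" in the log)
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0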
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:59.230 22:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:59.491 22:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:59.491 22:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:59.491 22:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:59.491 22:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:59.491 22:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:59.491 22:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:59.491 22:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:59.491 22:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:39:59.491 22:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:59.491 22:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:59.491 22:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:59.491 22:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:59.491 22:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWMxMzI4YWNkMDg3MDk4YzFkZTAzMTFlNTFiNTg3OGNulVAJ: 00:39:59.491 22:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: 00:39:59.491 22:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:59.491 22:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:59.491 22:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWMxMzI4YWNkMDg3MDk4YzFkZTAzMTFlNTFiNTg3OGNulVAJ: 00:39:59.491 22:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: ]] 00:39:59.491 22:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: 00:39:59.491 22:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:39:59.491 22:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:59.491 22:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:59.491 22:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:59.491 22:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:59.491 22:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:59.492 22:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:39:59.492 22:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:59.492 22:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:59.492 22:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:59.492 22:38:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:59.492 22:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:59.492 22:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:59.492 22:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:59.492 22:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:59.492 22:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:59.492 22:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:59.492 22:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:59.492 22:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:59.492 22:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:59.492 22:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:59.492 22:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:59.492 22:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:59.492 22:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:00.064 nvme0n1 00:40:00.064 22:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:00.064 22:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:00.064 22:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:00.064 22:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:00.064 22:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:00.064 22:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:00.324 22:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:00.324 22:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:00.324 22:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:00.325 22:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:00.325 22:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:00.325 22:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:00.325 22:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:40:00.325 22:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:00.325 22:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:40:00.325 22:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:40:00.325 22:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:40:00.325 22:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:M2E3MGI4NmNjMmI1MGYyMDI1MDJkZmZmMzRjMzA5OWZkOTQxYzM4MTQ4Zjk5NTRmM0RBIQ==: 00:40:00.325 22:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzE2NWMzNTYyNzM2NzVkODVkZTE3ZjVhMzUyN2Y4NWaSDkhX: 00:40:00.325 22:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:40:00.325 22:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:40:00.325 22:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E3MGI4NmNjMmI1MGYyMDI1MDJkZmZmMzRjMzA5OWZkOTQxYzM4MTQ4Zjk5NTRmM0RBIQ==: 00:40:00.325 22:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzE2NWMzNTYyNzM2NzVkODVkZTE3ZjVhMzUyN2Y4NWaSDkhX: ]] 00:40:00.325 22:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzE2NWMzNTYyNzM2NzVkODVkZTE3ZjVhMzUyN2Y4NWaSDkhX: 00:40:00.325 22:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:40:00.325 22:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:00.325 22:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:40:00.325 22:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:40:00.325 22:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:40:00.325 22:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:00.325 22:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:40:00.325 22:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:00.325 22:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:00.325 22:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:00.325 22:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:00.325 22:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:00.325 22:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:00.325 22:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:00.325 22:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:00.325 22:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:00.325 22:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:00.325 22:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:00.325 22:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:00.325 22:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:00.325 22:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:00.325 22:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:40:00.325 22:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:00.325 
22:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:00.897 nvme0n1 00:40:00.897 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:00.897 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:00.897 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:00.897 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:00.897 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:00.897 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:01.159 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:01.159 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:01.159 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:01.159 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:01.159 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:01.159 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:01.159 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:40:01.159 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:01.159 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:40:01.159 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:40:01.159 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:40:01.159 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzI4NzA3YjgxMGEwNzAyYWM0YmRhYjA5ZjliYzZkYTdhNGQ5ZDU5MGI2MzU5OGJlYTJmZDVkY2U5MmQ1NjIwZrBhziw=: 00:40:01.159 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:40:01.159 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:40:01.159 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:40:01.159 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzI4NzA3YjgxMGEwNzAyYWM0YmRhYjA5ZjliYzZkYTdhNGQ5ZDU5MGI2MzU5OGJlYTJmZDVkY2U5MmQ1NjIwZrBhziw=: 00:40:01.159 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:40:01.159 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:40:01.159 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:01.159 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:40:01.159 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:40:01.159 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:40:01.159 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:01.159 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:40:01.159 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:40:01.159 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:01.159 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:01.159 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:01.159 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:01.159 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:01.159 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:01.159 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:01.159 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:01.159 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:01.159 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:01.159 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:01.159 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:01.159 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:01.159 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:40:01.159 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:01.159 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:01.730 nvme0n1 00:40:01.730 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:01.730 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:01.730 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:01.730 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:01.730 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:01.730 22:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:01.992 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:01.992 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:01.992 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:01.992 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:01.992 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:01.992 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:40:01.992 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:01.992 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:40:01.992 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:40:01.992 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:40:01.992 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJjYzgzMDVmYmYwMDJhNzUyMzI2Y2Y0MjE5Nzg2YjgyNjBhZDhhNzI1OGZjMjhiJ3xOtg==: 00:40:01.992 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: 00:40:01.992 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:40:01.992 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:40:01.992 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJjYzgzMDVmYmYwMDJhNzUyMzI2Y2Y0MjE5Nzg2YjgyNjBhZDhhNzI1OGZjMjhiJ3xOtg==: 00:40:01.992 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: ]] 00:40:01.992 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: 00:40:01.992 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:40:01.992 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:01.992 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:01.992 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:01.992 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:40:01.992 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:01.992 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:01.992 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:01.992 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:01.992 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:01.992 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:01.992 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:01.992 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:01.992 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:01.992 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:01.992 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:40:01.992 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:40:01.992 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:40:01.992 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:40:01.992 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:01.992 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:40:01.992 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:01.992 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:40:01.992 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:01.992 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:01.992 request: 00:40:01.992 { 00:40:01.992 "name": "nvme0", 00:40:01.992 "trtype": "tcp", 00:40:01.992 "traddr": "10.0.0.1", 00:40:01.992 "adrfam": "ipv4", 00:40:01.992 "trsvcid": "4420", 00:40:01.992 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:40:01.992 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:40:01.992 "prchk_reftag": false, 00:40:01.992 "prchk_guard": false, 00:40:01.992 "hdgst": false, 00:40:01.992 "ddgst": false, 00:40:01.992 "allow_unrecognized_csi": false, 00:40:01.992 "method": "bdev_nvme_attach_controller", 00:40:01.992 "req_id": 1 00:40:01.992 } 00:40:01.993 Got JSON-RPC error response 00:40:01.993 response: 00:40:01.993 { 00:40:01.993 "code": -5, 00:40:01.993 "message": "Input/output error" 00:40:01.993 } 00:40:01.993 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:40:01.993 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:40:01.993 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:40:01.993 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:40:01.993 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:40:01.993 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:40:01.993 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:40:01.993 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:01.993 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:01.993 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:01.993 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:40:01.993 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:40:01.993 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:01.993 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:01.993 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:01.993 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:01.993 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:01.993 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:01.993 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:01.993 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:01.993 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 
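
The rejected attach that produced the request/response pair above is the expected negative case: this target was provisioned with DH-HMAC-CHAP keys, so a connect that offers no --dhchap-key is refused and the RPC surfaces code -5 ("Input/output error"). For contrast, here is a minimal by-hand sketch of the failing and succeeding calls, assuming rpc_cmd resolves to SPDK's scripts/rpc.py as usual; the transport flags, NQNs, and key names below are taken verbatim from this trace:

# restrict host-side negotiation to the digest/dhgroup under test
./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# rejected with JSON-RPC -5: no DH-CHAP key offered to an authenticating target
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0

# accepted: key1/ckey1 match the keys loaded on the target
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

After each successful attach the harness verifies the controller with bdev_nvme_get_controllers | jq -r '.[].name' and detaches it with bdev_nvme_detach_controller nvme0 before moving on to the next key id, which is the loop visible throughout this trace.
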
00:40:01.993 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:01.993 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:40:01.993 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:40:01.993 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:40:01.993 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:40:01.993 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:01.993 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:40:01.993 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:01.993 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:40:01.993 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:01.993 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:01.993 request: 00:40:01.993 { 00:40:01.993 "name": "nvme0", 00:40:01.993 "trtype": "tcp", 00:40:01.993 "traddr": "10.0.0.1", 00:40:01.993 "adrfam": "ipv4", 00:40:01.993 "trsvcid": "4420", 00:40:01.993 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:40:01.993 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:40:01.993 "prchk_reftag": false, 00:40:01.993 "prchk_guard": false, 00:40:01.993 "hdgst": false, 00:40:01.993 "ddgst": false, 00:40:01.993 "dhchap_key": "key2", 00:40:01.993 "allow_unrecognized_csi": false, 00:40:01.993 "method": "bdev_nvme_attach_controller", 00:40:01.993 "req_id": 1 00:40:01.993 } 00:40:01.993 Got JSON-RPC error response 00:40:01.993 response: 00:40:01.993 { 00:40:01.993 "code": -5, 00:40:01.993 "message": "Input/output error" 00:40:01.993 } 00:40:01.993 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:40:01.993 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:40:01.993 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:40:01.993 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:40:01.993 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:40:01.993 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:40:01.993 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:40:01.993 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:01.993 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:02.254 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:02.254 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
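
Each of these must-fail attaches runs under the NOT wrapper, and its expansion is exactly what the es bookkeeping in the trace shows: es starts at 0, is set to 1 when the wrapped rpc_cmd fails, and (( !es == 0 )) makes the wrapper itself succeed only when the command failed. An illustrative, simplified reconstruction of that pattern (the real helper lives in autotest_common.sh and additionally validates its argument via type -t and special-cases es > 128, as the valid_exec_arg lines indicate; this sketch keeps only the inversion logic):

NOT() {
    local es=0
    "$@" || es=$?      # run the wrapped command, capturing its exit status
    (( es != 0 ))      # invert: NOT succeeds only when the command failed
}

# as used above: the attach offering only key2 must be rejected
NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2

The same guard protects the key-rotation checks further down, where bdev_nvme_set_keys with a mismatched key/ctrlr-key pair returns JSON-RPC -13 ("Permission denied") rather than -5.
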
00:40:02.254 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:40:02.254 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:02.254 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:02.254 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:02.254 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:02.254 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:02.254 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:02.254 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:02.254 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:02.254 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:02.254 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:02.254 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:40:02.254 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:40:02.254 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:40:02.254 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:40:02.254 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:02.254 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:40:02.254 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:02.254 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:40:02.254 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:02.254 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:02.254 request: 00:40:02.254 { 00:40:02.254 "name": "nvme0", 00:40:02.254 "trtype": "tcp", 00:40:02.254 "traddr": "10.0.0.1", 00:40:02.254 "adrfam": "ipv4", 00:40:02.254 "trsvcid": "4420", 00:40:02.254 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:40:02.254 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:40:02.254 "prchk_reftag": false, 00:40:02.254 "prchk_guard": false, 00:40:02.254 "hdgst": false, 00:40:02.254 "ddgst": false, 00:40:02.254 "dhchap_key": "key1", 00:40:02.254 "dhchap_ctrlr_key": "ckey2", 00:40:02.254 "allow_unrecognized_csi": false, 00:40:02.254 "method": "bdev_nvme_attach_controller", 00:40:02.254 "req_id": 1 00:40:02.254 } 00:40:02.254 Got JSON-RPC error response 00:40:02.254 response: 00:40:02.254 { 00:40:02.254 "code": -5, 00:40:02.254 "message": "Input/output 
error" 00:40:02.254 } 00:40:02.254 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:40:02.254 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:40:02.254 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:40:02.254 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:40:02.254 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:40:02.254 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:40:02.254 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:02.254 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:02.254 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:02.254 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:02.254 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:02.254 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:02.254 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:02.254 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:02.254 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:02.254 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:02.254 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:40:02.254 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:02.255 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:02.516 nvme0n1 00:40:02.516 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:02.516 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:40:02.516 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:02.516 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:40:02.516 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:40:02.516 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:40:02.516 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWMxMzI4YWNkMDg3MDk4YzFkZTAzMTFlNTFiNTg3OGNulVAJ: 00:40:02.516 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: 00:40:02.516 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:40:02.516 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:40:02.516 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWMxMzI4YWNkMDg3MDk4YzFkZTAzMTFlNTFiNTg3OGNulVAJ: 00:40:02.516 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: ]] 00:40:02.516 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: 00:40:02.516 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:40:02.516 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:02.516 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:02.516 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:02.516 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:40:02.516 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:40:02.516 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:02.516 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:02.516 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:02.516 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:02.516 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:40:02.516 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:40:02.516 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:40:02.516 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:40:02.517 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:02.517 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:40:02.517 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:02.517 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:40:02.517 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:02.517 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:02.517 request: 00:40:02.517 { 00:40:02.517 "name": "nvme0", 00:40:02.517 "dhchap_key": "key1", 00:40:02.517 "dhchap_ctrlr_key": "ckey2", 00:40:02.517 "method": "bdev_nvme_set_keys", 00:40:02.517 "req_id": 1 00:40:02.517 } 00:40:02.517 Got JSON-RPC error response 00:40:02.517 response: 00:40:02.517 { 00:40:02.517 "code": -13, 00:40:02.517 "message": "Permission denied" 00:40:02.517 } 00:40:02.517 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:40:02.517 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:40:02.517 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:40:02.517 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:40:02.517 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:40:02.517 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:40:02.517 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:02.517 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:40:02.517 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:02.517 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:02.517 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:40:02.517 22:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:40:03.916 22:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:40:03.916 22:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:40:03.916 22:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:03.916 22:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:03.916 22:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:03.916 22:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:40:03.916 22:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:40:04.859 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:40:04.859 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:40:04.859 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:04.859 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:04.860 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:04.860 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:40:04.860 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:40:04.860 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:04.860 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:40:04.860 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:40:04.860 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:40:04.860 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJjYzgzMDVmYmYwMDJhNzUyMzI2Y2Y0MjE5Nzg2YjgyNjBhZDhhNzI1OGZjMjhiJ3xOtg==: 00:40:04.860 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: 00:40:04.860 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:40:04.860 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:40:04.860 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJjYzgzMDVmYmYwMDJhNzUyMzI2Y2Y0MjE5Nzg2YjgyNjBhZDhhNzI1OGZjMjhiJ3xOtg==: 00:40:04.860 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: ]] 00:40:04.860 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MWE5M2JjOGM1MmQ5ZmJhYmZkMzBhZjcyZjM2MTU4ODk0ODE3YmEyNDRhYmFmNmRijzhxdQ==: 00:40:04.860 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:40:04.860 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:04.860 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:04.860 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:04.860 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:04.860 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:04.860 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:04.860 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:04.860 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:04.860 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:04.860 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:04.860 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:40:04.860 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:04.860 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:04.860 nvme0n1 00:40:04.860 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:04.860 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:40:04.860 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:04.860 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:40:04.860 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:40:04.860 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:40:04.860 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWMxMzI4YWNkMDg3MDk4YzFkZTAzMTFlNTFiNTg3OGNulVAJ: 00:40:04.860 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: 00:40:04.860 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:40:04.860 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:40:04.860 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWMxMzI4YWNkMDg3MDk4YzFkZTAzMTFlNTFiNTg3OGNulVAJ: 00:40:04.860 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: ]] 00:40:04.860 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTAyZDE1MWNlZjhjMmQyOGJlNTFjOWY1M2MwYWE3ZmEBh5uT: 00:40:04.860 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:40:04.860 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:40:04.860 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:40:04.860 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:40:04.860 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:04.860 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:40:04.860 22:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:04.860 22:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:40:04.860 22:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:04.860 22:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:04.860 request: 00:40:04.860 { 00:40:04.860 "name": "nvme0", 00:40:04.860 "dhchap_key": "key2", 00:40:04.860 "dhchap_ctrlr_key": "ckey1", 00:40:04.860 "method": "bdev_nvme_set_keys", 00:40:04.860 "req_id": 1 00:40:04.860 } 00:40:04.860 Got JSON-RPC error response 00:40:04.860 response: 00:40:04.860 { 00:40:04.860 "code": -13, 00:40:04.860 "message": "Permission denied" 00:40:04.860 } 00:40:04.860 22:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:40:04.860 22:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:40:04.860 22:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:40:04.860 22:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:40:04.860 22:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:40:04.860 22:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:40:04.860 22:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:40:04.860 22:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:04.860 22:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:04.860 22:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:04.860 22:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:40:04.860 22:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:40:06.248 22:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:40:06.248 22:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:40:06.248 22:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:06.248 22:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:06.248 22:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:06.248 22:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:40:06.248 22:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:40:06.248 22:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:40:06.248 22:39:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:40:06.248 22:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:40:06.248 22:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:40:06.248 22:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:06.248 22:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:40:06.248 22:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:06.248 22:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:06.248 rmmod nvme_tcp 00:40:06.248 rmmod nvme_fabrics 00:40:06.248 22:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:06.248 22:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:40:06.248 22:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:40:06.248 22:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@515 -- # '[' -n 361838 ']' 00:40:06.248 22:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # killprocess 361838 00:40:06.248 22:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 361838 ']' 00:40:06.248 22:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 361838 00:40:06.248 22:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:40:06.248 22:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:06.248 22:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 361838 00:40:06.248 22:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:06.248 22:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:06.248 22:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 361838' 00:40:06.248 killing process with pid 361838 00:40:06.248 22:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 361838 00:40:06.248 22:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 361838 00:40:06.248 22:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:40:06.248 22:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:40:06.248 22:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:40:06.248 22:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:40:06.248 22:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-save 00:40:06.248 22:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:40:06.248 22:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-restore 00:40:06.248 22:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:06.248 22:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:06.248 22:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:06.248 22:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:40:06.248 22:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:08.795 22:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:08.795 22:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:40:08.795 22:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:40:08.795 22:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:40:08.795 22:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:40:08.795 22:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # echo 0 00:40:08.795 22:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:40:08.795 22:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:40:08.795 22:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:40:08.795 22:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:40:08.795 22:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:40:08.795 22:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:40:08.795 22:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:40:12.106 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:40:12.106 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:40:12.106 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:40:12.106 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:40:12.106 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:40:12.106 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:40:12.106 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:40:12.106 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:40:12.106 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:40:12.106 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:40:12.106 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:40:12.106 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:40:12.106 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:40:12.106 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:40:12.106 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:40:12.106 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:40:12.106 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:40:12.679 22:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.RNz /tmp/spdk.key-null.Gn1 /tmp/spdk.key-sha256.s1q /tmp/spdk.key-sha384.xv9 /tmp/spdk.key-sha512.rzD /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:40:12.679 22:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:40:15.983 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:40:15.983 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:40:15.983 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
00:40:15.983 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:40:15.983 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:40:15.983 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:40:15.983 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:40:15.983 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:40:15.983 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:40:15.983 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:40:15.983 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:40:15.983 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:40:15.983 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:40:15.983 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:40:15.983 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:40:15.983 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:40:15.983 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:40:16.244 00:40:16.244 real 1m6.237s 00:40:16.244 user 0m59.866s 00:40:16.244 sys 0m16.447s 00:40:16.244 22:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:16.244 22:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:16.244 ************************************ 00:40:16.244 END TEST nvmf_auth_host 00:40:16.244 ************************************ 00:40:16.505 22:39:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:40:16.505 22:39:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:40:16.505 22:39:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:40:16.505 22:39:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:16.505 22:39:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:40:16.505 ************************************ 00:40:16.505 START TEST nvmf_digest 00:40:16.505 ************************************ 00:40:16.505 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:40:16.505 * Looking for test storage... 
00:40:16.505 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:40:16.505 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:16.505 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lcov --version 00:40:16.505 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:16.505 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:16.505 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:16.505 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:16.505 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:16.505 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:40:16.505 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:40:16.505 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:40:16.505 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:40:16.505 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:40:16.505 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:40:16.505 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:40:16.505 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:16.505 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:40:16.505 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:40:16.505 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:16.505 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:16.505 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:40:16.505 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:40:16.505 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:16.505 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:40:16.505 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:40:16.505 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:40:16.505 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:40:16.505 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:16.505 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:40:16.505 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:40:16.505 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:16.505 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:16.505 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:40:16.505 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:16.505 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:16.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:16.505 --rc genhtml_branch_coverage=1 00:40:16.505 --rc genhtml_function_coverage=1 00:40:16.505 --rc genhtml_legend=1 00:40:16.505 --rc geninfo_all_blocks=1 00:40:16.505 --rc geninfo_unexecuted_blocks=1 00:40:16.505 00:40:16.505 ' 00:40:16.505 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:16.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:16.505 --rc genhtml_branch_coverage=1 00:40:16.505 --rc genhtml_function_coverage=1 00:40:16.505 --rc genhtml_legend=1 00:40:16.505 --rc geninfo_all_blocks=1 00:40:16.505 --rc geninfo_unexecuted_blocks=1 00:40:16.505 00:40:16.505 ' 00:40:16.505 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:16.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:16.505 --rc genhtml_branch_coverage=1 00:40:16.505 --rc genhtml_function_coverage=1 00:40:16.505 --rc genhtml_legend=1 00:40:16.505 --rc geninfo_all_blocks=1 00:40:16.505 --rc geninfo_unexecuted_blocks=1 00:40:16.505 00:40:16.505 ' 00:40:16.505 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:16.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:16.505 --rc genhtml_branch_coverage=1 00:40:16.505 --rc genhtml_function_coverage=1 00:40:16.505 --rc genhtml_legend=1 00:40:16.505 --rc geninfo_all_blocks=1 00:40:16.505 --rc geninfo_unexecuted_blocks=1 00:40:16.505 00:40:16.505 ' 00:40:16.505 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:16.505 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:40:16.766 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:16.766 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:16.766 
22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:16.766 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:16.766 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:16.766 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:16.766 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:16.766 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:16.766 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:16.766 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:16.766 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:40:16.766 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:40:16.766 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:16.766 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:16.766 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:16.766 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:16.766 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:16.766 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:40:16.766 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:16.766 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:16.766 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:16.766 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:16.766 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:16.766 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:16.766 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:40:16.766 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:16.766 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:40:16.766 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:16.766 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:16.766 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:16.766 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:16.766 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:16.766 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:16.766 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:16.766 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:16.766 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:16.766 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:16.766 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:40:16.766 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:40:16.766 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:40:16.766 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:40:16.766 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:40:16.766 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:40:16.766 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:16.766 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # prepare_net_devs 00:40:16.766 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # local -g is_hw=no 00:40:16.766 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # remove_spdk_ns 00:40:16.766 22:39:11 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:16.766 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:16.766 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:16.766 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:40:16.767 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:40:16.767 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:40:16.767 22:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:24.904 
22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:40:24.904 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:40:24.904 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:24.904 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:24.905 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:24.905 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:24.905 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:24.905 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:24.905 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:40:24.905 Found net devices under 0000:4b:00.0: cvl_0_0 
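gather_supported_nvmf_pci_devs resolves each matching NIC's PCI address to its kernel interface by listing the device's net/ subdirectory in sysfs, exactly as the xtrace shows. Standalone, the lookup is just:

    # map a PCI address to its net device name via sysfs (addresses from this run)
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done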
00:40:24.905 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:24.905 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:24.905 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:24.905 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:24.905 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:24.905 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:24.905 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:24.905 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:24.905 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:40:24.905 Found net devices under 0000:4b:00.1: cvl_0_1 00:40:24.905 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:24.905 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:40:24.905 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # is_hw=yes 00:40:24.905 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:40:24.905 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:40:24.905 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:40:24.905 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:24.905 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:24.905 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:24.905 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:24.905 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:24.905 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:24.905 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:24.905 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:24.905 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:24.905 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:24.905 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:24.905 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:24.905 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:24.905 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:24.905 22:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:24.905 22:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:24.905 22:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:40:24.905 22:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:24.905 22:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:24.905 22:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:24.905 22:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:24.905 22:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:24.905 22:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:24.905 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:24.905 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:40:24.905 00:40:24.905 --- 10.0.0.2 ping statistics --- 00:40:24.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:24.905 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:40:24.905 22:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:24.905 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:24.905 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:40:24.905 00:40:24.905 --- 10.0.0.1 ping statistics --- 00:40:24.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:24.905 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:40:24.905 22:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:24.905 22:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # return 0 00:40:24.905 22:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:40:24.905 22:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:24.905 22:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:40:24.905 22:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:40:24.905 22:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:24.905 22:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:40:24.905 22:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:40:24.905 22:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:40:24.905 22:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:40:24.905 22:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:40:24.905 22:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:40:24.905 22:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:24.905 22:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:40:24.905 ************************************ 00:40:24.905 START TEST nvmf_digest_clean 00:40:24.905 ************************************ 00:40:24.905 22:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:40:24.905 22:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:40:24.905 22:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:40:24.905 22:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:40:24.905 22:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:40:24.905 22:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:40:24.905 22:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:40:24.905 22:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:24.905 22:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:40:24.905 22:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # nvmfpid=380547 00:40:24.905 22:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # waitforlisten 380547 00:40:24.905 22:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:40:24.905 22:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 380547 ']' 00:40:24.905 22:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:24.905 22:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:24.905 22:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:24.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:24.905 22:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:24.905 22:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:40:24.905 [2024-10-01 22:39:19.408859] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:40:24.906 [2024-10-01 22:39:19.408909] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:24.906 [2024-10-01 22:39:19.477684] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:24.906 [2024-10-01 22:39:19.542640] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:24.906 [2024-10-01 22:39:19.542678] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:24.906 [2024-10-01 22:39:19.542687] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:24.906 [2024-10-01 22:39:19.542695] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:24.906 [2024-10-01 22:39:19.542703] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
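nvmftestinit above carves the test network out of the two E810 ports: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), with an iptables ACCEPT for port 4420 and a ping in each direction to verify the path. nvmfappstart then launches the target inside that namespace, paused until RPC init. Condensed from the xtrace, with repository paths shortened:

    # target-in-namespace plumbing, as executed above
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # start the target in the namespace, waiting for RPC before serving I/O
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &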
00:40:24.906 [2024-10-01 22:39:19.542724] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:40:25.167 22:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:25.167 22:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:40:25.167 22:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:40:25.167 22:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:25.167 22:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:40:25.167 22:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:25.167 22:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:40:25.167 22:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:40:25.167 22:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:40:25.167 22:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:25.167 22:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:40:25.167 null0 00:40:25.167 [2024-10-01 22:39:20.385240] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:25.167 [2024-10-01 22:39:20.409445] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:25.167 22:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:25.167 22:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:40:25.167 22:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:40:25.167 22:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:40:25.167 22:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:40:25.167 22:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:40:25.167 22:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:40:25.167 22:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:40:25.167 22:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=380896 00:40:25.167 22:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 380896 /var/tmp/bperf.sock 00:40:25.167 22:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 380896 ']' 00:40:25.167 22:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:40:25.168 22:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:25.168 22:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:40:25.168 22:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:25.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:25.168 22:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:25.168 22:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:40:25.429 [2024-10-01 22:39:20.466536] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:40:25.429 [2024-10-01 22:39:20.466586] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid380896 ] 00:40:25.429 [2024-10-01 22:39:20.542128] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:25.429 [2024-10-01 22:39:20.606830] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:40:26.001 22:39:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:26.001 22:39:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:40:26.001 22:39:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:40:26.001 22:39:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:40:26.001 22:39:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:40:26.579 22:39:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:40:26.579 22:39:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:40:26.839 nvme0n1 00:40:26.839 22:39:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:40:26.839 22:39:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:26.839 Running I/O for 2 seconds... 
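Every digest pass repeats the same bdevperf recipe visible in the xtrace: start bdevperf paused on its own RPC socket (-z --wait-for-rpc), finish framework init over RPC, attach the target subsystem with the data-digest flag under test (--ddgst in every pass of this run), then fire the timed run through bdevperf.py on the same socket. Condensed, with the flags copied from this first 4 KiB randread pass:

    # digest-test recipe distilled from the xtrace above (paths abbreviated)
    BPERF_SOCK=/var/tmp/bperf.sock
    build/examples/bdevperf -m 2 -r $BPERF_SOCK -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    scripts/rpc.py -s $BPERF_SOCK framework_start_init
    scripts/rpc.py -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests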
00:40:29.168 19172.00 IOPS, 74.89 MiB/s 19326.00 IOPS, 75.49 MiB/s 00:40:29.168 Latency(us) 00:40:29.168 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:29.168 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:40:29.168 nvme0n1 : 2.04 18965.59 74.08 0.00 0.00 6608.41 2908.16 45219.84 00:40:29.168 =================================================================================================================== 00:40:29.168 Total : 18965.59 74.08 0.00 0.00 6608.41 2908.16 45219.84 00:40:29.168 { 00:40:29.168 "results": [ 00:40:29.168 { 00:40:29.168 "job": "nvme0n1", 00:40:29.168 "core_mask": "0x2", 00:40:29.168 "workload": "randread", 00:40:29.168 "status": "finished", 00:40:29.168 "queue_depth": 128, 00:40:29.168 "io_size": 4096, 00:40:29.168 "runtime": 2.044756, 00:40:29.168 "iops": 18965.588070165828, 00:40:29.168 "mibps": 74.08432839908527, 00:40:29.168 "io_failed": 0, 00:40:29.168 "io_timeout": 0, 00:40:29.168 "avg_latency_us": 6608.412605810555, 00:40:29.168 "min_latency_us": 2908.16, 00:40:29.168 "max_latency_us": 45219.84 00:40:29.168 } 00:40:29.168 ], 00:40:29.168 "core_count": 1 00:40:29.168 } 00:40:29.168 22:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:40:29.168 22:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:40:29.168 22:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:40:29.168 22:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:40:29.168 | select(.opcode=="crc32c") 00:40:29.168 | "\(.module_name) \(.executed)"' 00:40:29.168 22:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:40:29.168 22:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:40:29.168 22:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:40:29.168 22:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:40:29.168 22:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:40:29.168 22:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 380896 00:40:29.168 22:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 380896 ']' 00:40:29.168 22:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 380896 00:40:29.168 22:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:40:29.168 22:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:29.168 22:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 380896 00:40:29.168 22:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:40:29.168 22:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:40:29.168 22:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 380896' 00:40:29.168 killing process with pid 380896 00:40:29.168 22:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 380896 00:40:29.168 Received shutdown signal, test time was about 2.000000 seconds 00:40:29.168 00:40:29.168 Latency(us) 00:40:29.168 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:29.168 =================================================================================================================== 00:40:29.168 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:29.168 22:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 380896 00:40:29.429 22:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:40:29.429 22:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:40:29.429 22:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:40:29.429 22:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:40:29.429 22:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:40:29.429 22:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:40:29.429 22:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:40:29.429 22:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=381582 00:40:29.429 22:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 381582 /var/tmp/bperf.sock 00:40:29.429 22:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 381582 ']' 00:40:29.429 22:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:40:29.429 22:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:29.429 22:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:29.429 22:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:29.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:29.429 22:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:29.429 22:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:40:29.429 [2024-10-01 22:39:24.560273] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:40:29.429 [2024-10-01 22:39:24.560329] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid381582 ] 00:40:29.429 I/O size of 131072 is greater than zero copy threshold (65536). 00:40:29.429 Zero copy mechanism will not be used. 
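Pass/fail for each run above hinges on the accel framework statistics, not just the IOPS table: the script reads accel_get_stats over the bperf socket, filters for crc32c operations with jq, and requires a non-zero executed count from the expected module (software here, since no DSA is in play). The same check, standalone:

    # confirm crc32c digests were actually computed, and by which accel module
    scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"' \
        | { read -r acc_module acc_executed
            (( acc_executed > 0 )) && [[ $acc_module == software ]] \
                && echo "crc32c: $acc_module executed $acc_executed ops"; }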
00:40:29.429 [2024-10-01 22:39:24.637144] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:29.690 [2024-10-01 22:39:24.689677] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:40:30.261 22:39:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:30.261 22:39:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:40:30.261 22:39:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:40:30.261 22:39:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:40:30.262 22:39:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:40:30.522 22:39:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:40:30.522 22:39:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:40:30.782 nvme0n1 00:40:30.782 22:39:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:40:30.782 22:39:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:31.043 I/O size of 131072 is greater than zero copy threshold (65536). 00:40:31.043 Zero copy mechanism will not be used. 00:40:31.043 Running I/O for 2 seconds... 
00:40:32.926 2881.00 IOPS, 360.12 MiB/s 2893.00 IOPS, 361.62 MiB/s 00:40:32.926 Latency(us) 00:40:32.926 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:32.926 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:40:32.926 nvme0n1 : 2.04 2840.02 355.00 0.00 0.00 5523.94 1112.75 43690.67 00:40:32.926 =================================================================================================================== 00:40:32.926 Total : 2840.02 355.00 0.00 0.00 5523.94 1112.75 43690.67 00:40:32.926 { 00:40:32.926 "results": [ 00:40:32.926 { 00:40:32.926 "job": "nvme0n1", 00:40:32.926 "core_mask": "0x2", 00:40:32.926 "workload": "randread", 00:40:32.926 "status": "finished", 00:40:32.926 "queue_depth": 16, 00:40:32.926 "io_size": 131072, 00:40:32.926 "runtime": 2.042942, 00:40:32.926 "iops": 2840.021889999814, 00:40:32.926 "mibps": 355.00273624997675, 00:40:32.926 "io_failed": 0, 00:40:32.926 "io_timeout": 0, 00:40:32.926 "avg_latency_us": 5523.936996438009, 00:40:32.926 "min_latency_us": 1112.7466666666667, 00:40:32.926 "max_latency_us": 43690.666666666664 00:40:32.926 } 00:40:32.926 ], 00:40:32.926 "core_count": 1 00:40:32.926 } 00:40:32.926 22:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:40:32.926 22:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:40:32.926 22:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:40:32.926 22:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:40:32.926 | select(.opcode=="crc32c") 00:40:32.926 | "\(.module_name) \(.executed)"' 00:40:32.926 22:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:40:33.187 22:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:40:33.187 22:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:40:33.187 22:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:40:33.187 22:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:40:33.187 22:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 381582 00:40:33.187 22:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 381582 ']' 00:40:33.187 22:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 381582 00:40:33.187 22:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:40:33.187 22:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:33.187 22:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 381582 00:40:33.187 22:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:40:33.187 22:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:40:33.187 22:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 381582' 00:40:33.187 killing process with pid 381582 00:40:33.187 22:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 381582 00:40:33.187 Received shutdown signal, test time was about 2.000000 seconds 00:40:33.187 00:40:33.187 Latency(us) 00:40:33.187 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:33.187 =================================================================================================================== 00:40:33.187 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:33.187 22:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 381582 00:40:33.448 22:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:40:33.448 22:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:40:33.448 22:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:40:33.448 22:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:40:33.448 22:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:40:33.448 22:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:40:33.448 22:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:40:33.448 22:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=382322 00:40:33.448 22:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 382322 /var/tmp/bperf.sock 00:40:33.448 22:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 382322 ']' 00:40:33.448 22:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:40:33.448 22:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:33.448 22:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:33.448 22:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:33.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:33.448 22:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:33.448 22:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:40:33.448 [2024-10-01 22:39:28.558791] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:40:33.448 [2024-10-01 22:39:28.558850] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid382322 ] 00:40:33.448 [2024-10-01 22:39:28.634651] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:33.448 [2024-10-01 22:39:28.688336] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:40:34.390 22:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:34.390 22:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:40:34.390 22:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:40:34.390 22:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:40:34.390 22:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:40:34.390 22:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:40:34.390 22:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:40:34.961 nvme0n1 00:40:34.961 22:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:40:34.962 22:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:34.962 Running I/O for 2 seconds... 
00:40:36.847 21342.00 IOPS, 83.37 MiB/s 21452.00 IOPS, 83.80 MiB/s 00:40:36.847 Latency(us) 00:40:36.847 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:36.847 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:36.847 nvme0n1 : 2.00 21463.00 83.84 0.00 0.00 5955.10 2293.76 11687.25 00:40:36.847 =================================================================================================================== 00:40:36.847 Total : 21463.00 83.84 0.00 0.00 5955.10 2293.76 11687.25 00:40:36.847 { 00:40:36.847 "results": [ 00:40:36.847 { 00:40:36.847 "job": "nvme0n1", 00:40:36.847 "core_mask": "0x2", 00:40:36.847 "workload": "randwrite", 00:40:36.847 "status": "finished", 00:40:36.847 "queue_depth": 128, 00:40:36.847 "io_size": 4096, 00:40:36.847 "runtime": 2.004939, 00:40:36.847 "iops": 21462.997128590945, 00:40:36.847 "mibps": 83.83983253355838, 00:40:36.847 "io_failed": 0, 00:40:36.847 "io_timeout": 0, 00:40:36.847 "avg_latency_us": 5955.097477845944, 00:40:36.847 "min_latency_us": 2293.76, 00:40:36.847 "max_latency_us": 11687.253333333334 00:40:36.847 } 00:40:36.847 ], 00:40:36.847 "core_count": 1 00:40:36.847 } 00:40:36.847 22:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:40:36.847 22:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:40:36.847 22:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:40:36.847 22:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:40:36.847 | select(.opcode=="crc32c") 00:40:36.847 | "\(.module_name) \(.executed)"' 00:40:36.847 22:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:40:37.108 22:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:40:37.108 22:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:40:37.108 22:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:40:37.108 22:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:40:37.108 22:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 382322 00:40:37.108 22:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 382322 ']' 00:40:37.108 22:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 382322 00:40:37.108 22:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:40:37.108 22:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:37.108 22:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 382322 00:40:37.108 22:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:40:37.108 22:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:40:37.108 22:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 382322' 00:40:37.108 killing process with pid 382322 00:40:37.108 22:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 382322 00:40:37.108 Received shutdown signal, test time was about 2.000000 seconds 00:40:37.108 00:40:37.108 Latency(us) 00:40:37.108 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:37.108 =================================================================================================================== 00:40:37.108 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:37.108 22:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 382322 00:40:37.369 22:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:40:37.369 22:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:40:37.369 22:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:40:37.369 22:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:40:37.369 22:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:40:37.369 22:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:40:37.369 22:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:40:37.369 22:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=383195 00:40:37.369 22:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 383195 /var/tmp/bperf.sock 00:40:37.369 22:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 383195 ']' 00:40:37.369 22:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:40:37.369 22:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:37.369 22:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:37.369 22:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:37.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:37.369 22:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:37.369 22:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:40:37.369 [2024-10-01 22:39:32.492430] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:40:37.369 [2024-10-01 22:39:32.492487] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid383195 ] 00:40:37.369 I/O size of 131072 is greater than zero copy threshold (65536). 
00:40:37.369 Zero copy mechanism will not be used. 00:40:37.369 [2024-10-01 22:39:32.569214] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:37.630 [2024-10-01 22:39:32.623302] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:40:38.201 22:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:38.201 22:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:40:38.201 22:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:40:38.201 22:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:40:38.201 22:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:40:38.462 22:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:40:38.462 22:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:40:38.722 nvme0n1 00:40:38.722 22:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:40:38.722 22:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:38.984 I/O size of 131072 is greater than zero copy threshold (65536). 00:40:38.984 Zero copy mechanism will not be used. 00:40:38.984 Running I/O for 2 seconds... 
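The check that follows the results below reads accel statistics over the bdevperf RPC socket and asserts that the crc32c digest work ran in the software module. A minimal standalone sketch of that verification, assuming only the socket path, RPC name, and jq filter already visible in this trace (the exact executed count is whatever the run produced):

  # Ask the bdevperf instance for accel stats and pull out the crc32c accounting,
  # e.g. "software 1234" (module name, operations executed).
  read -r acc_module acc_executed < <(
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )
  # Pass only if some crc32c operations executed and they ran in software.
  (( acc_executed > 0 )) && [[ $acc_module == software ]]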
00:40:40.867 3488.00 IOPS, 436.00 MiB/s 3399.50 IOPS, 424.94 MiB/s 00:40:40.867 Latency(us) 00:40:40.867 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:40.867 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:40:40.867 nvme0n1 : 2.00 3401.45 425.18 0.00 0.00 4698.87 1727.15 6853.97 00:40:40.867 =================================================================================================================== 00:40:40.867 Total : 3401.45 425.18 0.00 0.00 4698.87 1727.15 6853.97 00:40:40.867 { 00:40:40.867 "results": [ 00:40:40.867 { 00:40:40.867 "job": "nvme0n1", 00:40:40.867 "core_mask": "0x2", 00:40:40.867 "workload": "randwrite", 00:40:40.867 "status": "finished", 00:40:40.867 "queue_depth": 16, 00:40:40.867 "io_size": 131072, 00:40:40.867 "runtime": 2.004732, 00:40:40.867 "iops": 3401.4521641795513, 00:40:40.867 "mibps": 425.1815205224439, 00:40:40.867 "io_failed": 0, 00:40:40.867 "io_timeout": 0, 00:40:40.867 "avg_latency_us": 4698.870051327173, 00:40:40.867 "min_latency_us": 1727.1466666666668, 00:40:40.867 "max_latency_us": 6853.973333333333 00:40:40.867 } 00:40:40.867 ], 00:40:40.867 "core_count": 1 00:40:40.867 } 00:40:40.867 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:40:40.867 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:40:40.867 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:40:40.867 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:40:40.867 | select(.opcode=="crc32c") 00:40:40.867 | "\(.module_name) \(.executed)"' 00:40:40.867 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:40:41.128 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:40:41.128 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:40:41.128 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:40:41.128 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:40:41.128 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 383195 00:40:41.128 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 383195 ']' 00:40:41.128 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 383195 00:40:41.128 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:40:41.128 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:41.128 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 383195 00:40:41.128 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:40:41.128 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:40:41.128 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 383195' 00:40:41.128 killing process with pid 383195 00:40:41.128 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 383195 00:40:41.128 Received shutdown signal, test time was about 2.000000 seconds 00:40:41.128 00:40:41.128 Latency(us) 00:40:41.128 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:41.128 =================================================================================================================== 00:40:41.128 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:41.128 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 383195 00:40:41.390 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 380547 00:40:41.390 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 380547 ']' 00:40:41.390 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 380547 00:40:41.390 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:40:41.390 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:41.390 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 380547 00:40:41.390 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:41.390 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:41.390 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 380547' 00:40:41.390 killing process with pid 380547 00:40:41.390 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 380547 00:40:41.390 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 380547 00:40:41.651 00:40:41.651 real 0m17.381s 00:40:41.651 user 0m34.146s 00:40:41.651 sys 0m3.604s 00:40:41.651 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:41.651 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:40:41.651 ************************************ 00:40:41.651 END TEST nvmf_digest_clean 00:40:41.651 ************************************ 00:40:41.651 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:40:41.651 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:40:41.651 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:41.651 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:40:41.651 ************************************ 00:40:41.651 START TEST nvmf_digest_error 00:40:41.651 ************************************ 00:40:41.651 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:40:41.651 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:40:41.651 22:39:36 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:40:41.651 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:41.651 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:40:41.651 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # nvmfpid=383984 00:40:41.651 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # waitforlisten 383984 00:40:41.651 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:40:41.651 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 383984 ']' 00:40:41.651 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:41.651 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:41.651 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:41.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:41.651 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:41.651 22:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:40:41.651 [2024-10-01 22:39:36.870978] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:40:41.651 [2024-10-01 22:39:36.871027] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:41.913 [2024-10-01 22:39:36.936550] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:41.913 [2024-10-01 22:39:37.000616] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:41.913 [2024-10-01 22:39:37.000659] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:41.913 [2024-10-01 22:39:37.000667] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:41.913 [2024-10-01 22:39:37.000674] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:41.913 [2024-10-01 22:39:37.000680] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
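Nothing in this run actually captures the tracepoints enabled by -e 0xFFFF; the two retrieval paths are the ones the target's own notices above spell out. A sketch, with the app name and shm id 0 matching the "-i 0" the target was started with (the /tmp destination is an illustrative choice, not from this run):

  # Snapshot the nvmf target's trace events at runtime:
  spdk_trace -s nvmf -i 0
  # Or copy the raw shared-memory trace for offline analysis:
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0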
00:40:41.913 [2024-10-01 22:39:37.000703] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:40:42.485 22:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:42.485 22:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:40:42.485 22:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:40:42.485 22:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:42.485 22:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:40:42.485 22:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:42.485 22:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:40:42.485 22:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:42.485 22:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:40:42.485 [2024-10-01 22:39:37.694677] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:40:42.485 22:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:42.485 22:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:40:42.485 22:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:40:42.485 22:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:42.485 22:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:40:42.746 null0 00:40:42.746 [2024-10-01 22:39:37.832854] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:42.746 [2024-10-01 22:39:37.857054] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:42.746 22:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:42.746 22:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:40:42.746 22:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:40:42.746 22:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:40:42.746 22:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:40:42.746 22:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:40:42.746 22:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=384327 00:40:42.746 22:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 384327 /var/tmp/bperf.sock 00:40:42.746 22:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 384327 ']' 00:40:42.746 22:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
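The traces that follow arm the digest-error injection this test exists for: the target already routed crc32c to the error-injecting accel module (accel_assign_opc above), so corrupting its output makes the host's data-digest validation of received TCP PDUs fail, which is what produces the long run of "data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR" pairs below. A condensed sketch of the RPC sequence in the order traced, assuming rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py against the target's default socket:

  # Initiator (bdevperf) side: keep NVMe error stats and retry indefinitely, so
  # injected digest failures surface as transient transport errors, not failed I/O.
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Target side: make sure no injection is active while the controller attaches.
  scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  # Initiator side: attach the controller with TCP data digest (--ddgst) enabled.
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Target side: start corrupting crc32c results (-i 256 paces the injections).
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256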
00:40:42.746 22:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:42.746 22:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:42.746 22:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:42.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:42.746 22:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:42.746 22:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:40:42.746 [2024-10-01 22:39:37.912871] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:40:42.746 [2024-10-01 22:39:37.912920] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid384327 ] 00:40:42.746 [2024-10-01 22:39:37.989269] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:43.007 [2024-10-01 22:39:38.044111] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:40:43.578 22:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:43.578 22:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:40:43.578 22:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:40:43.578 22:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:40:43.839 22:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:40:43.839 22:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:43.839 22:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:40:43.839 22:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:43.839 22:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:40:43.839 22:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:40:44.099 nvme0n1 00:40:44.099 22:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:40:44.099 22:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:44.099 22:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:40:44.099 
22:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:44.099 22:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:40:44.099 22:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:44.099 Running I/O for 2 seconds... 00:40:44.099 [2024-10-01 22:39:39.292606] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.099 [2024-10-01 22:39:39.292641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.099 [2024-10-01 22:39:39.292651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.099 [2024-10-01 22:39:39.307321] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.099 [2024-10-01 22:39:39.307341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:2323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.099 [2024-10-01 22:39:39.307349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.099 [2024-10-01 22:39:39.321058] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.099 [2024-10-01 22:39:39.321077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.099 [2024-10-01 22:39:39.321084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.099 [2024-10-01 22:39:39.331717] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.099 [2024-10-01 22:39:39.331736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:25252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.099 [2024-10-01 22:39:39.331743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.099 [2024-10-01 22:39:39.343902] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.099 [2024-10-01 22:39:39.343920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.099 [2024-10-01 22:39:39.343927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.361 [2024-10-01 22:39:39.357296] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.361 [2024-10-01 22:39:39.357315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:24289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.361 [2024-10-01 22:39:39.357322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:40:44.361 [2024-10-01 22:39:39.370667] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.361 [2024-10-01 22:39:39.370685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:18818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.361 [2024-10-01 22:39:39.370692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.361 [2024-10-01 22:39:39.383995] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.361 [2024-10-01 22:39:39.384012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.361 [2024-10-01 22:39:39.384018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.361 [2024-10-01 22:39:39.397356] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.361 [2024-10-01 22:39:39.397373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:22777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.361 [2024-10-01 22:39:39.397379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.361 [2024-10-01 22:39:39.409578] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.361 [2024-10-01 22:39:39.409595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.361 [2024-10-01 22:39:39.409602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.361 [2024-10-01 22:39:39.421012] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.361 [2024-10-01 22:39:39.421029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.361 [2024-10-01 22:39:39.421036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.361 [2024-10-01 22:39:39.433527] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.361 [2024-10-01 22:39:39.433544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:18259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.361 [2024-10-01 22:39:39.433551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.361 [2024-10-01 22:39:39.446348] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.361 [2024-10-01 22:39:39.446365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:4558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.361 [2024-10-01 22:39:39.446372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.361 [2024-10-01 22:39:39.459212] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.361 [2024-10-01 22:39:39.459230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.361 [2024-10-01 22:39:39.459237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.361 [2024-10-01 22:39:39.471065] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.361 [2024-10-01 22:39:39.471083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:24516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.361 [2024-10-01 22:39:39.471093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.361 [2024-10-01 22:39:39.484073] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.361 [2024-10-01 22:39:39.484090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:7857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.361 [2024-10-01 22:39:39.484097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.361 [2024-10-01 22:39:39.496798] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.361 [2024-10-01 22:39:39.496816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:10107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.361 [2024-10-01 22:39:39.496822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.361 [2024-10-01 22:39:39.510356] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.361 [2024-10-01 22:39:39.510374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.361 [2024-10-01 22:39:39.510381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.361 [2024-10-01 22:39:39.522870] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.361 [2024-10-01 22:39:39.522889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.361 [2024-10-01 22:39:39.522895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.361 [2024-10-01 22:39:39.535524] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.361 [2024-10-01 22:39:39.535541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.361 [2024-10-01 22:39:39.535548] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.361 [2024-10-01 22:39:39.547142] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.361 [2024-10-01 22:39:39.547160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.361 [2024-10-01 22:39:39.547166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.361 [2024-10-01 22:39:39.561224] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.361 [2024-10-01 22:39:39.561241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:4364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.361 [2024-10-01 22:39:39.561247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.361 [2024-10-01 22:39:39.572157] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.362 [2024-10-01 22:39:39.572174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.362 [2024-10-01 22:39:39.572181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.362 [2024-10-01 22:39:39.583752] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.362 [2024-10-01 22:39:39.583770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:3518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.362 [2024-10-01 22:39:39.583777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.362 [2024-10-01 22:39:39.598250] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.362 [2024-10-01 22:39:39.598268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.362 [2024-10-01 22:39:39.598275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.362 [2024-10-01 22:39:39.611041] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.362 [2024-10-01 22:39:39.611059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.362 [2024-10-01 22:39:39.611065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.622 [2024-10-01 22:39:39.622672] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.622 [2024-10-01 22:39:39.622690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.622 
[2024-10-01 22:39:39.622697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.622 [2024-10-01 22:39:39.634774] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.622 [2024-10-01 22:39:39.634791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.622 [2024-10-01 22:39:39.634798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.622 [2024-10-01 22:39:39.647595] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.622 [2024-10-01 22:39:39.647612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:15961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.622 [2024-10-01 22:39:39.647619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.622 [2024-10-01 22:39:39.660056] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.622 [2024-10-01 22:39:39.660073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.622 [2024-10-01 22:39:39.660080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.622 [2024-10-01 22:39:39.673061] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.622 [2024-10-01 22:39:39.673078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:18821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.622 [2024-10-01 22:39:39.673084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.622 [2024-10-01 22:39:39.685507] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.622 [2024-10-01 22:39:39.685523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.622 [2024-10-01 22:39:39.685533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.622 [2024-10-01 22:39:39.698733] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.622 [2024-10-01 22:39:39.698749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:11168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.622 [2024-10-01 22:39:39.698755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.622 [2024-10-01 22:39:39.710579] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.622 [2024-10-01 22:39:39.710596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24582 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.622 [2024-10-01 22:39:39.710602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.622 [2024-10-01 22:39:39.723609] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.622 [2024-10-01 22:39:39.723629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.622 [2024-10-01 22:39:39.723635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.622 [2024-10-01 22:39:39.735481] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.622 [2024-10-01 22:39:39.735498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.623 [2024-10-01 22:39:39.735505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.623 [2024-10-01 22:39:39.747587] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.623 [2024-10-01 22:39:39.747604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:17994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.623 [2024-10-01 22:39:39.747611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.623 [2024-10-01 22:39:39.760112] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.623 [2024-10-01 22:39:39.760128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:24662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.623 [2024-10-01 22:39:39.760135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.623 [2024-10-01 22:39:39.773492] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.623 [2024-10-01 22:39:39.773510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:24059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.623 [2024-10-01 22:39:39.773516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.623 [2024-10-01 22:39:39.787257] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.623 [2024-10-01 22:39:39.787273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.623 [2024-10-01 22:39:39.787279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.623 [2024-10-01 22:39:39.798066] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.623 [2024-10-01 22:39:39.798086] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:7098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.623 [2024-10-01 22:39:39.798092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.623 [2024-10-01 22:39:39.809951] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.623 [2024-10-01 22:39:39.809968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.623 [2024-10-01 22:39:39.809974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.623 [2024-10-01 22:39:39.823122] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.623 [2024-10-01 22:39:39.823139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.623 [2024-10-01 22:39:39.823146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.623 [2024-10-01 22:39:39.836462] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.623 [2024-10-01 22:39:39.836479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.623 [2024-10-01 22:39:39.836485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.623 [2024-10-01 22:39:39.849320] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.623 [2024-10-01 22:39:39.849337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.623 [2024-10-01 22:39:39.849344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.623 [2024-10-01 22:39:39.860132] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.623 [2024-10-01 22:39:39.860149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:12556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.623 [2024-10-01 22:39:39.860155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.883 [2024-10-01 22:39:39.875140] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.883 [2024-10-01 22:39:39.875158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:24186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.883 [2024-10-01 22:39:39.875167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.883 [2024-10-01 22:39:39.888005] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 
00:40:44.883 [2024-10-01 22:39:39.888022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.884 [2024-10-01 22:39:39.888029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.884 [2024-10-01 22:39:39.901362] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.884 [2024-10-01 22:39:39.901378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:14293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.884 [2024-10-01 22:39:39.901385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.884 [2024-10-01 22:39:39.914314] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.884 [2024-10-01 22:39:39.914331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.884 [2024-10-01 22:39:39.914337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.884 [2024-10-01 22:39:39.925832] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.884 [2024-10-01 22:39:39.925849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.884 [2024-10-01 22:39:39.925856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.884 [2024-10-01 22:39:39.938176] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.884 [2024-10-01 22:39:39.938193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.884 [2024-10-01 22:39:39.938200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.884 [2024-10-01 22:39:39.952704] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.884 [2024-10-01 22:39:39.952721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:21167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.884 [2024-10-01 22:39:39.952727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.884 [2024-10-01 22:39:39.965783] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.884 [2024-10-01 22:39:39.965807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.884 [2024-10-01 22:39:39.965814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.884 [2024-10-01 22:39:39.977888] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.884 [2024-10-01 22:39:39.977904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:18175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.884 [2024-10-01 22:39:39.977911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.884 [2024-10-01 22:39:39.991129] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.884 [2024-10-01 22:39:39.991146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.884 [2024-10-01 22:39:39.991153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.884 [2024-10-01 22:39:40.004671] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.884 [2024-10-01 22:39:40.004688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.884 [2024-10-01 22:39:40.004694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.884 [2024-10-01 22:39:40.016500] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.884 [2024-10-01 22:39:40.016517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:7797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.884 [2024-10-01 22:39:40.016526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.884 [2024-10-01 22:39:40.028464] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.884 [2024-10-01 22:39:40.028481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.884 [2024-10-01 22:39:40.028487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.884 [2024-10-01 22:39:40.041018] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.884 [2024-10-01 22:39:40.041035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.884 [2024-10-01 22:39:40.041042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.884 [2024-10-01 22:39:40.054980] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.884 [2024-10-01 22:39:40.054998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.884 [2024-10-01 22:39:40.055005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.884 [2024-10-01 22:39:40.068563] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.884 [2024-10-01 22:39:40.068581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.884 [2024-10-01 22:39:40.068588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.884 [2024-10-01 22:39:40.081681] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.884 [2024-10-01 22:39:40.081698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:25220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.884 [2024-10-01 22:39:40.081706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.884 [2024-10-01 22:39:40.095322] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.884 [2024-10-01 22:39:40.095338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.884 [2024-10-01 22:39:40.095345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.884 [2024-10-01 22:39:40.108224] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.884 [2024-10-01 22:39:40.108241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:25082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.884 [2024-10-01 22:39:40.108248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.884 [2024-10-01 22:39:40.120590] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.884 [2024-10-01 22:39:40.120607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.884 [2024-10-01 22:39:40.120614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:44.884 [2024-10-01 22:39:40.133308] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:44.884 [2024-10-01 22:39:40.133330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.884 [2024-10-01 22:39:40.133337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:45.144 [2024-10-01 22:39:40.145111] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30) 00:40:45.144 [2024-10-01 22:39:40.145128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:45.145 [2024-10-01 22:39:40.145135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0
00:40:45.145 [2024-10-01 22:39:40.156116] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30)
00:40:45.145 [2024-10-01 22:39:40.156132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:45.145 [2024-10-01 22:39:40.156139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:45.145 [... the same three-line pattern (data digest error, READ command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for the remaining READ completions of this run, 22:39:40.170 through 22:39:41.263; cid and lba vary, tqpair is always 0x1682f30; an interim throughput sample of 19863.00 IOPS, 77.59 MiB/s is reported at 22:39:40.272 ...]
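Every failed completion in the run above prints the same three-line pattern, so a captured console log can be sanity-checked with grep before consulting the RPC counters. A minimal sketch, assuming the console output was saved to a file (the log path is hypothetical):

#!/usr/bin/env bash
# Cross-check digest-error counts in a saved bperf console capture.
# bperf-console.log is a hypothetical path; the real output lives in the
# Jenkins build log above.
LOG=${1:-bperf-console.log}

# One "data digest error" line is printed per failed completion ...
errors=$(grep -c 'data digest error on tqpair' "$LOG")

# ... and each one should complete with the retryable 00/22 status.
transient=$(grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' "$LOG")

echo "digest errors: $errors, transient completions: $transient"
[ "$errors" -eq "$transient" ] || echo "mismatch between digest errors and 00/22 completions" >&2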
00:40:46.189 19977.50 IOPS, 78.04 MiB/s
00:40:46.189 [2024-10-01 22:39:41.275735] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1682f30)
00:40:46.189 [2024-10-01 22:39:41.275751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:46.189 [2024-10-01 22:39:41.275758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:46.189
00:40:46.189 Latency(us)
00:40:46.189 Device Information : runtime(s)     IOPS    MiB/s  Fail/s  TO/s  Average      min      max
00:40:46.189 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:40:46.189 nvme0n1            :       2.04 19597.21   76.55    0.00  0.00  6395.92  2389.33 47841.28
00:40:46.189 ===================================================================================================================
00:40:46.189 Total              :            19597.21   76.55    0.00  0.00  6395.92  2389.33 47841.28
00:40:46.189 {
00:40:46.189   "results": [
00:40:46.189     {
00:40:46.189       "job": "nvme0n1",
00:40:46.189       "core_mask": "0x2",
00:40:46.189       "workload": "randread",
00:40:46.189       "status": "finished",
00:40:46.189       "queue_depth": 128,
00:40:46.189       "io_size": 4096,
00:40:46.189       "runtime": 2.044679,
00:40:46.189       "iops": 19597.2081681281,
00:40:46.189       "mibps": 76.5515944067504,
00:40:46.189       "io_failed": 0,
00:40:46.189       "io_timeout": 0,
00:40:46.189       "avg_latency_us": 6395.915755428001,
00:40:46.189       "min_latency_us": 2389.3333333333335,
00:40:46.189       "max_latency_us": 47841.28
00:40:46.189     }
00:40:46.189   ],
00:40:46.189   "core_count": 1
00:40:46.189 }
00:40:46.189 22:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:40:46.189 22:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:40:46.189 22:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:40:46.189 22:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:40:46.189 | .driver_specific
00:40:46.189 | .nvme_error
00:40:46.189 | .status_code
00:40:46.189 | .command_transient_transport_error'
00:40:46.448 22:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 157 > 0 ))
00:40:46.448 22:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 384327
00:40:46.448 22:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 384327 ']'
00:40:46.448 22:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 384327
00:40:46.448 22:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:40:46.448 22:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:40:46.448 22:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 384327
00:40:46.449 22:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:40:46.449 22:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:40:46.449 22:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 384327'
00:40:46.449 killing process with pid 384327
00:40:46.449 22:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 384327
00:40:46.449 Received shutdown signal, test time was about 2.000000 seconds
00:40:46.449
00:40:46.449 Latency(us)
00:40:46.449 Device Information : runtime(s)     IOPS    MiB/s  Fail/s  TO/s  Average      min      max
00:40:46.449 ===================================================================================================================
00:40:46.449 Total              :                0.00     0.00    0.00  0.00     0.00     0.00     0.00
00:40:46.449 22:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 384327
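The (( 157 > 0 )) check above is the pass condition for the run that just finished: get_transient_errcount pulls the per-bdev NVMe error counters over the bperf RPC socket (available because the controller was attached with --nvme-error-stat set) and jq extracts the transient-transport-error count, which must be non-zero while digest corruption is injected. The same query as a standalone sketch; the paths, socket, and bdev name are exactly those in the trace:

#!/usr/bin/env bash
# Read the transient transport error counter for nvme0n1 from a running
# bdevperf, the same way host/digest.sh implements get_transient_errcount.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/bperf.sock

count=$("$RPC" -s "$SOCK" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0]
             | .driver_specific
             | .nvme_error
             | .status_code
             | .command_transient_transport_error')

# Non-zero means the injected digest corruption surfaced as retryable
# 00/22 completions rather than failed I/O (io_failed stayed 0 above).
(( count > 0 )) || { echo "no transient transport errors recorded" >&2; exit 1; }
echo "transient transport errors: $count"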
00:40:46.709 22:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:40:46.709 22:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:40:46.709 22:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:40:46.709 22:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:40:46.709 22:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:40:46.709 22:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=385019
00:40:46.709 22:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 385019 /var/tmp/bperf.sock
00:40:46.709 22:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 385019 ']'
00:40:46.709 22:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:40:46.709 22:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:40:46.709 22:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:40:46.709 22:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:40:46.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:40:46.709 22:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:40:46.709 22:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:40:46.709 [2024-10-01 22:39:41.794441] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization...
00:40:46.709 [2024-10-01 22:39:41.794498] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid385019 ]
00:40:46.709 I/O size of 131072 is greater than zero copy threshold (65536).
00:40:46.709 Zero copy mechanism will not be used.
00:40:46.709 [2024-10-01 22:39:41.872251] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:40:46.709 [2024-10-01 22:39:41.925893] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:40:47.651 22:39:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:40:47.651 22:39:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:40:47.651 22:39:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:40:47.651 22:39:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:40:47.651 22:39:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:40:47.651 22:39:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:40:47.651 22:39:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:40:47.651 22:39:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:40:47.651 22:39:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:40:47.651 22:39:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:40:47.911 nvme0n1
00:40:47.911 22:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:40:47.911 22:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:40:47.911 22:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:40:47.911 22:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:40:47.911 22:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:40:47.911 22:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:40:47.911 I/O size of 131072 is greater than zero copy threshold (65536).
00:40:47.911 Zero copy mechanism will not be used.
00:40:47.911 Running I/O for 2 seconds...
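Minus the xtrace noise, the setup above reduces to five steps: start bdevperf idle, turn on NVMe error accounting with unlimited retries, clear any stale injection, attach the target with data digest enabled, then arm CRC32C corruption and start the timed run. A condensed replay as a sketch: SPDK_DIR, the backgrounding, and the socket-wait loop are assumptions; every flag, address, and NQN is taken verbatim from the trace, and reading -i 32 as the injection interval is an interpretation:

#!/usr/bin/env bash
set -e
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumed layout
BPERF=/var/tmp/bperf.sock

# 1. bdevperf in wait mode (-z): 128 KiB random reads, queue depth 16,
#    2-second run, idle until perform_tests arrives on its RPC socket.
"$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$BPERF" \
    -w randread -o 131072 -t 2 -q 16 -z &

# Wait for the RPC socket to appear (digest.sh uses waitforlisten here).
until [ -S "$BPERF" ]; do sleep 0.1; done

# 2. Count NVMe errors per status code and retry transient failures
#    indefinitely, so injected digest errors never fail the workload.
"$SPDK_DIR/scripts/rpc.py" -s "$BPERF" bdev_nvme_set_options \
    --nvme-error-stat --bdev-retry-count -1

# 3. Make sure no stale injection is armed on the target side
#    (rpc.py without -s talks to the nvmf target's default socket).
"$SPDK_DIR/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

# 4. Attach the target with data digest enabled (--ddgst): the host now
#    verifies a CRC32C over the data of every PDU it receives.
"$SPDK_DIR/scripts/rpc.py" -s "$BPERF" bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# 5. Corrupt the target-side crc32c results at the traced interval,
#    then kick off the timed run.
"$SPDK_DIR/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF" perform_tests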
00:40:47.911 [2024-10-01 22:39:43.160582] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920)
00:40:47.911 [2024-10-01 22:39:43.160614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:47.911 [2024-10-01 22:39:43.160623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:48.172 [... the same three-line pattern repeats for this run's 128 KiB reads (len:32 blocks; tqpair is now 0x158b920; cid, lba, and sqhd vary) from 22:39:43.169 onward ...]
0x0 00:40:48.173 [2024-10-01 22:39:43.414299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:48.173 [2024-10-01 22:39:43.419687] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.173 [2024-10-01 22:39:43.419705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.173 [2024-10-01 22:39:43.419716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:48.434 [2024-10-01 22:39:43.425166] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.434 [2024-10-01 22:39:43.425184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.434 [2024-10-01 22:39:43.425191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:48.434 [2024-10-01 22:39:43.431093] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.434 [2024-10-01 22:39:43.431112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.434 [2024-10-01 22:39:43.431119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:48.434 [2024-10-01 22:39:43.436478] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.434 [2024-10-01 22:39:43.436496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.434 [2024-10-01 22:39:43.436503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:48.434 [2024-10-01 22:39:43.441868] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.434 [2024-10-01 22:39:43.441886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.434 [2024-10-01 22:39:43.441893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:48.434 [2024-10-01 22:39:43.447128] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.434 [2024-10-01 22:39:43.447147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.434 [2024-10-01 22:39:43.447153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:48.434 [2024-10-01 22:39:43.454711] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.434 [2024-10-01 22:39:43.454730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.434 [2024-10-01 22:39:43.454736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:48.434 [2024-10-01 22:39:43.462050] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.434 [2024-10-01 22:39:43.462068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.434 [2024-10-01 22:39:43.462075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:48.434 [2024-10-01 22:39:43.467299] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.434 [2024-10-01 22:39:43.467318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.434 [2024-10-01 22:39:43.467324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:48.434 [2024-10-01 22:39:43.475249] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.434 [2024-10-01 22:39:43.475270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.434 [2024-10-01 22:39:43.475277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:48.434 [2024-10-01 22:39:43.481689] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.434 [2024-10-01 22:39:43.481707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.434 [2024-10-01 22:39:43.481714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:48.434 [2024-10-01 22:39:43.492223] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.434 [2024-10-01 22:39:43.492242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.434 [2024-10-01 22:39:43.492249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:48.434 [2024-10-01 22:39:43.503721] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.434 [2024-10-01 22:39:43.503740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.434 [2024-10-01 22:39:43.503747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:48.434 [2024-10-01 22:39:43.513858] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.434 [2024-10-01 22:39:43.513877] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.434 [2024-10-01 22:39:43.513883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:48.434 [2024-10-01 22:39:43.521207] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.434 [2024-10-01 22:39:43.521226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.434 [2024-10-01 22:39:43.521233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:48.434 [2024-10-01 22:39:43.530797] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.434 [2024-10-01 22:39:43.530816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.434 [2024-10-01 22:39:43.530822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:48.434 [2024-10-01 22:39:43.541869] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.434 [2024-10-01 22:39:43.541887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.434 [2024-10-01 22:39:43.541894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:48.434 [2024-10-01 22:39:43.551734] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.434 [2024-10-01 22:39:43.551752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.434 [2024-10-01 22:39:43.551758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:48.434 [2024-10-01 22:39:43.562682] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.434 [2024-10-01 22:39:43.562699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.434 [2024-10-01 22:39:43.562706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:48.434 [2024-10-01 22:39:43.570729] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.434 [2024-10-01 22:39:43.570747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.434 [2024-10-01 22:39:43.570753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:48.434 [2024-10-01 22:39:43.577457] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 
00:40:48.434 [2024-10-01 22:39:43.577477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.434 [2024-10-01 22:39:43.577483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:48.434 [2024-10-01 22:39:43.588380] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.434 [2024-10-01 22:39:43.588398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.434 [2024-10-01 22:39:43.588405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:48.434 [2024-10-01 22:39:43.600491] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.434 [2024-10-01 22:39:43.600510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.434 [2024-10-01 22:39:43.600516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:48.434 [2024-10-01 22:39:43.613103] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.435 [2024-10-01 22:39:43.613122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.435 [2024-10-01 22:39:43.613128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:48.435 [2024-10-01 22:39:43.623633] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.435 [2024-10-01 22:39:43.623652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.435 [2024-10-01 22:39:43.623658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:48.435 [2024-10-01 22:39:43.628907] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.435 [2024-10-01 22:39:43.628925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.435 [2024-10-01 22:39:43.628931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:48.435 [2024-10-01 22:39:43.634725] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.435 [2024-10-01 22:39:43.634743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.435 [2024-10-01 22:39:43.634753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:48.435 [2024-10-01 22:39:43.639998] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.435 [2024-10-01 22:39:43.640017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.435 [2024-10-01 22:39:43.640023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:48.435 [2024-10-01 22:39:43.645143] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.435 [2024-10-01 22:39:43.645162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.435 [2024-10-01 22:39:43.645168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:48.435 [2024-10-01 22:39:43.657095] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.435 [2024-10-01 22:39:43.657113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.435 [2024-10-01 22:39:43.657119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:48.435 [2024-10-01 22:39:43.665568] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.435 [2024-10-01 22:39:43.665586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.435 [2024-10-01 22:39:43.665592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:48.435 [2024-10-01 22:39:43.674928] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.435 [2024-10-01 22:39:43.674946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.435 [2024-10-01 22:39:43.674952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:48.435 [2024-10-01 22:39:43.684659] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.435 [2024-10-01 22:39:43.684677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.435 [2024-10-01 22:39:43.684684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:48.697 [2024-10-01 22:39:43.692923] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.697 [2024-10-01 22:39:43.692941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.697 [2024-10-01 22:39:43.692948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:48.697 [2024-10-01 22:39:43.703533] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.697 [2024-10-01 22:39:43.703552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.697 [2024-10-01 22:39:43.703558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:48.697 [2024-10-01 22:39:43.715167] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.697 [2024-10-01 22:39:43.715186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.697 [2024-10-01 22:39:43.715192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:48.697 [2024-10-01 22:39:43.722217] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.697 [2024-10-01 22:39:43.722235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.697 [2024-10-01 22:39:43.722242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:48.697 [2024-10-01 22:39:43.729803] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.697 [2024-10-01 22:39:43.729820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.697 [2024-10-01 22:39:43.729827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:48.697 [2024-10-01 22:39:43.734456] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.697 [2024-10-01 22:39:43.734474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.697 [2024-10-01 22:39:43.734480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:48.697 [2024-10-01 22:39:43.739319] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.697 [2024-10-01 22:39:43.739338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.697 [2024-10-01 22:39:43.739344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:48.697 [2024-10-01 22:39:43.744511] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.697 [2024-10-01 22:39:43.744529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.697 [2024-10-01 22:39:43.744535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:40:48.697 [2024-10-01 22:39:43.750259] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.697 [2024-10-01 22:39:43.750277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.697 [2024-10-01 22:39:43.750283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:48.697 [2024-10-01 22:39:43.757985] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.697 [2024-10-01 22:39:43.758003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.697 [2024-10-01 22:39:43.758009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:48.697 [2024-10-01 22:39:43.765133] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.697 [2024-10-01 22:39:43.765152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.697 [2024-10-01 22:39:43.765161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:48.697 [2024-10-01 22:39:43.774385] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.697 [2024-10-01 22:39:43.774404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.697 [2024-10-01 22:39:43.774410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:48.697 [2024-10-01 22:39:43.785122] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.697 [2024-10-01 22:39:43.785142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.697 [2024-10-01 22:39:43.785148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:48.697 [2024-10-01 22:39:43.792776] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.697 [2024-10-01 22:39:43.792795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.697 [2024-10-01 22:39:43.792801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:48.697 [2024-10-01 22:39:43.804059] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.697 [2024-10-01 22:39:43.804078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.697 [2024-10-01 22:39:43.804084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:48.697 [2024-10-01 22:39:43.815334] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.697 [2024-10-01 22:39:43.815353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.697 [2024-10-01 22:39:43.815359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:48.697 [2024-10-01 22:39:43.824067] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.697 [2024-10-01 22:39:43.824085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.697 [2024-10-01 22:39:43.824091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:48.697 [2024-10-01 22:39:43.829272] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.697 [2024-10-01 22:39:43.829291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.697 [2024-10-01 22:39:43.829297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:48.697 [2024-10-01 22:39:43.836332] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.697 [2024-10-01 22:39:43.836350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.697 [2024-10-01 22:39:43.836356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:48.697 [2024-10-01 22:39:43.846143] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.697 [2024-10-01 22:39:43.846166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.697 [2024-10-01 22:39:43.846172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:48.697 [2024-10-01 22:39:43.853738] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.697 [2024-10-01 22:39:43.853757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.697 [2024-10-01 22:39:43.853763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:48.697 [2024-10-01 22:39:43.865185] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.697 [2024-10-01 22:39:43.865203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.697 [2024-10-01 22:39:43.865209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:48.697 [2024-10-01 22:39:43.873363] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.698 [2024-10-01 22:39:43.873382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.698 [2024-10-01 22:39:43.873388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:48.698 [2024-10-01 22:39:43.878934] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.698 [2024-10-01 22:39:43.878952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.698 [2024-10-01 22:39:43.878959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:48.698 [2024-10-01 22:39:43.886362] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.698 [2024-10-01 22:39:43.886380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.698 [2024-10-01 22:39:43.886386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:48.698 [2024-10-01 22:39:43.892089] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.698 [2024-10-01 22:39:43.892108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.698 [2024-10-01 22:39:43.892114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:48.698 [2024-10-01 22:39:43.899326] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.698 [2024-10-01 22:39:43.899344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.698 [2024-10-01 22:39:43.899350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:48.698 [2024-10-01 22:39:43.906061] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.698 [2024-10-01 22:39:43.906081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.698 [2024-10-01 22:39:43.906087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:48.698 [2024-10-01 22:39:43.913941] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.698 [2024-10-01 22:39:43.913959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.698 [2024-10-01 22:39:43.913965] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:48.698 [2024-10-01 22:39:43.923492] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.698 [2024-10-01 22:39:43.923510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.698 [2024-10-01 22:39:43.923517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:48.698 [2024-10-01 22:39:43.929648] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.698 [2024-10-01 22:39:43.929666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.698 [2024-10-01 22:39:43.929672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:48.698 [2024-10-01 22:39:43.938099] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.698 [2024-10-01 22:39:43.938117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.698 [2024-10-01 22:39:43.938124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:48.698 [2024-10-01 22:39:43.947655] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.698 [2024-10-01 22:39:43.947673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.698 [2024-10-01 22:39:43.947680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:48.959 [2024-10-01 22:39:43.958682] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.959 [2024-10-01 22:39:43.958700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.959 [2024-10-01 22:39:43.958706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:48.959 [2024-10-01 22:39:43.964698] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.959 [2024-10-01 22:39:43.964716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.959 [2024-10-01 22:39:43.964723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:48.959 [2024-10-01 22:39:43.973111] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.959 [2024-10-01 22:39:43.973128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.959 
[2024-10-01 22:39:43.973134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:48.959 [2024-10-01 22:39:43.978132] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.959 [2024-10-01 22:39:43.978150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.959 [2024-10-01 22:39:43.978160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:48.959 [2024-10-01 22:39:43.989279] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.959 [2024-10-01 22:39:43.989296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.959 [2024-10-01 22:39:43.989303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:48.959 [2024-10-01 22:39:43.994648] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.959 [2024-10-01 22:39:43.994667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.959 [2024-10-01 22:39:43.994673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:48.959 [2024-10-01 22:39:44.004064] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.959 [2024-10-01 22:39:44.004082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.959 [2024-10-01 22:39:44.004088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:48.959 [2024-10-01 22:39:44.012712] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.959 [2024-10-01 22:39:44.012730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.959 [2024-10-01 22:39:44.012737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:48.959 [2024-10-01 22:39:44.023348] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.960 [2024-10-01 22:39:44.023366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.960 [2024-10-01 22:39:44.023372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:48.960 [2024-10-01 22:39:44.028972] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.960 [2024-10-01 22:39:44.028990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1472 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.960 [2024-10-01 22:39:44.028996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:48.960 [2024-10-01 22:39:44.036804] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.960 [2024-10-01 22:39:44.036822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.960 [2024-10-01 22:39:44.036828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:48.960 [2024-10-01 22:39:44.042616] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.960 [2024-10-01 22:39:44.042639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.960 [2024-10-01 22:39:44.042645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:48.960 [2024-10-01 22:39:44.052188] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.960 [2024-10-01 22:39:44.052211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.960 [2024-10-01 22:39:44.052218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:48.960 [2024-10-01 22:39:44.062716] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.960 [2024-10-01 22:39:44.062734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.960 [2024-10-01 22:39:44.062740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:48.960 [2024-10-01 22:39:44.072263] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.960 [2024-10-01 22:39:44.072282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.960 [2024-10-01 22:39:44.072288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:48.960 [2024-10-01 22:39:44.081132] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.960 [2024-10-01 22:39:44.081151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.960 [2024-10-01 22:39:44.081157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:48.960 [2024-10-01 22:39:44.087384] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.960 [2024-10-01 22:39:44.087401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:7 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.960 [2024-10-01 22:39:44.087407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:48.960 [2024-10-01 22:39:44.091460] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.960 [2024-10-01 22:39:44.091477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.960 [2024-10-01 22:39:44.091483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:48.960 [2024-10-01 22:39:44.100733] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.960 [2024-10-01 22:39:44.100750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.960 [2024-10-01 22:39:44.100757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:48.960 [2024-10-01 22:39:44.112370] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.960 [2024-10-01 22:39:44.112389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.960 [2024-10-01 22:39:44.112395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:48.960 [2024-10-01 22:39:44.120429] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.960 [2024-10-01 22:39:44.120446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.960 [2024-10-01 22:39:44.120452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:48.960 [2024-10-01 22:39:44.129044] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.960 [2024-10-01 22:39:44.129061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.960 [2024-10-01 22:39:44.129067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:48.960 [2024-10-01 22:39:44.139023] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.960 [2024-10-01 22:39:44.139042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.960 [2024-10-01 22:39:44.139048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:48.960 3644.00 IOPS, 455.50 MiB/s [2024-10-01 22:39:44.149316] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920) 00:40:48.960 [2024-10-01 
22:39:44.149334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:48.960 [2024-10-01 22:39:44.149340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:48.960 [2024-10-01 22:39:44.161338] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920)
00:40:48.960 [2024-10-01 22:39:44.161355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:48.960 [2024-10-01 22:39:44.161361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-line pattern (nvme_tcp.c:1470 data digest error, nvme_qpair.c:243 READ command print, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for every affected I/O from 22:39:44.171895 through 22:39:45.135871 ...]
00:40:50.015 [2024-10-01 22:39:45.146638] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x158b920)
00:40:50.015 [2024-10-01 22:39:45.146657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:50.015 [2024-10-01 22:39:45.146663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:50.015 3690.50 IOPS, 461.31 MiB/s
00:40:50.015 Latency(us)
00:40:50.015 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:40:50.015 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:40:50.015 nvme0n1 : 2.00 3693.06 461.63 0.00 0.00 4329.20 621.23 12888.75
00:40:50.015 ===================================================================================================================
00:40:50.015 Total : 3693.06 461.63 0.00 0.00 4329.20 621.23 12888.75
00:40:50.015 {
00:40:50.015   "results": [
00:40:50.015     {
00:40:50.015       "job": "nvme0n1",
00:40:50.015       "core_mask": "0x2",
00:40:50.015       "workload": "randread",
00:40:50.015       "status": "finished",
00:40:50.015       "queue_depth": 16,
00:40:50.015       "io_size": 131072,
00:40:50.015       "runtime": 2.002944,
00:40:50.015       "iops": 3693.0638100715746,
00:40:50.015       "mibps": 461.6329762589468,
00:40:50.015       "io_failed": 0,
00:40:50.015       "io_timeout": 0,
00:40:50.015       "avg_latency_us": 4329.197069082061,
00:40:50.015       "min_latency_us": 621.2266666666667,
00:40:50.015       "max_latency_us": 12888.746666666666
00:40:50.015     }
00:40:50.015   ],
00:40:50.015   "core_count": 1
00:40:50.015 }
00:40:50.015 22:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:40:50.015 22:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:40:50.015 22:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:40:50.015 | .driver_specific
00:40:50.015 | .nvme_error
00:40:50.015 | .status_code
00:40:50.015 | .command_transient_transport_error'
22:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
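The jq pipeline in the trace above is how the harness turns bdevperf's iostat JSON into a single error count. A minimal sketch of that helper, reconstructed from the xtrace (the real function is get_transient_errcount in host/digest.sh; it assumes bdevperf is serving RPCs on /var/tmp/bperf.sock and that --nvme-error-stat was enabled, as in this run):

    # Sketch: count completions that ended in COMMAND TRANSIENT TRANSPORT
    # ERROR (00/22) for one bdev, via bdevperf's private RPC socket.
    get_transient_errcount() {
        local bdev=$1
        # driver_specific.nvme_error is populated only because the run set
        # bdev_nvme_set_options --nvme-error-stat beforehand.
        scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

    # host/digest.sh@71 then asserts that at least one injected digest error
    # actually surfaced; in this run the count was 238:
    (( $(get_transient_errcount nvme0n1) > 0 ))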
00:40:50.277 22:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 238 > 0 ))
00:40:50.277 22:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 385019
00:40:50.277 22:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 385019 ']'
00:40:50.277 22:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 385019
00:40:50.277 22:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:40:50.277 22:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:40:50.277 22:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 385019
00:40:50.277 22:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:40:50.277 22:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:40:50.277 22:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 385019'
00:40:50.277 killing process with pid 385019
00:40:50.277 22:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 385019
00:40:50.277 Received shutdown signal, test time was about 2.000000 seconds
00:40:50.277
00:40:50.277 Latency(us)
00:40:50.277 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:40:50.277 ===================================================================================================================
00:40:50.277 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:40:50.277 22:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 385019
00:40:50.540 22:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:40:50.540 22:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:40:50.540 22:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:40:50.540 22:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:40:50.540 22:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:40:50.540 22:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=385700
00:40:50.540 22:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 385700 /var/tmp/bperf.sock
00:40:50.540 22:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:40:50.540 22:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 385700 ']'
00:40:50.540 22:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:40:50.540 22:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:40:50.540 22:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:40:50.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
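Condensed, the launch phase just traced (host/digest.sh@54-60) amounts to the sketch below; paths are shortened relative to the absolute workspace paths in the log, and waitforlisten is the autotest helper seen in the trace:

    # Sketch of the bdevperf launch traced above. -z makes bdevperf idle
    # until a perform_tests RPC arrives on its socket.
    rw=randwrite bs=4096 qd=128
    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w "$rw" -o "$bs" -t 2 -q "$qd" -z &
    bperfpid=$!
    # Polls (up to max_retries=100) until the UNIX-domain socket accepts
    # RPC connections from the given pid.
    waitforlisten "$bperfpid" /var/tmp/bperf.sock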
up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:50.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:50.540 22:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:50.540 22:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:40:50.540 [2024-10-01 22:39:45.638085] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:40:50.540 [2024-10-01 22:39:45.638142] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid385700 ] 00:40:50.540 [2024-10-01 22:39:45.714907] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:50.540 [2024-10-01 22:39:45.768392] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:40:51.481 22:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:51.481 22:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:40:51.481 22:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:40:51.481 22:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:40:51.481 22:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:40:51.481 22:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:51.481 22:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:40:51.481 22:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:51.481 22:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:40:51.481 22:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:40:51.741 nvme0n1 00:40:51.741 22:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:40:51.741 22:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:51.741 22:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:40:51.741 22:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:51.742 22:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:40:51.742 22:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/bperf.sock perform_tests 00:40:52.003 Running I/O for 2 seconds... 00:40:52.003 [2024-10-01 22:39:47.070130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.003 [2024-10-01 22:39:47.070444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.003 [2024-10-01 22:39:47.070471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.003 [2024-10-01 22:39:47.082595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.003 [2024-10-01 22:39:47.082878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.003 [2024-10-01 22:39:47.082896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.003 [2024-10-01 22:39:47.095035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.003 [2024-10-01 22:39:47.095360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.003 [2024-10-01 22:39:47.095377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.003 [2024-10-01 22:39:47.107487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.003 [2024-10-01 22:39:47.107777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.003 [2024-10-01 22:39:47.107795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.003 [2024-10-01 22:39:47.119917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.003 [2024-10-01 22:39:47.120203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.003 [2024-10-01 22:39:47.120220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.003 [2024-10-01 22:39:47.132317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.003 [2024-10-01 22:39:47.132612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.003 [2024-10-01 22:39:47.132635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.003 [2024-10-01 22:39:47.144723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.003 [2024-10-01 22:39:47.145013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.003 [2024-10-01 22:39:47.145030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.003 [2024-10-01 22:39:47.157118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.003 [2024-10-01 22:39:47.157412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.003 [2024-10-01 22:39:47.157428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.003 [2024-10-01 22:39:47.169526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.003 [2024-10-01 22:39:47.169824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.003 [2024-10-01 22:39:47.169840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.003 [2024-10-01 22:39:47.181927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.003 [2024-10-01 22:39:47.182273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.003 [2024-10-01 22:39:47.182289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.003 [2024-10-01 22:39:47.194314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.003 [2024-10-01 22:39:47.194583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.003 [2024-10-01 22:39:47.194599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.003 [2024-10-01 22:39:47.206699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.003 [2024-10-01 22:39:47.206999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.003 [2024-10-01 22:39:47.207015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.003 [2024-10-01 22:39:47.219077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.003 [2024-10-01 22:39:47.219385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.003 [2024-10-01 22:39:47.219400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.003 [2024-10-01 22:39:47.231457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.003 [2024-10-01 22:39:47.231749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.003 [2024-10-01 22:39:47.231764] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.003 [2024-10-01 22:39:47.243823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.003 [2024-10-01 22:39:47.244109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.003 [2024-10-01 22:39:47.244125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.264 [2024-10-01 22:39:47.256256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.264 [2024-10-01 22:39:47.256603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.264 [2024-10-01 22:39:47.256618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.264 [2024-10-01 22:39:47.268645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.264 [2024-10-01 22:39:47.268925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.264 [2024-10-01 22:39:47.268941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.264 [2024-10-01 22:39:47.280982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.265 [2024-10-01 22:39:47.281248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.265 [2024-10-01 22:39:47.281263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.265 [2024-10-01 22:39:47.293378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.265 [2024-10-01 22:39:47.293678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.265 [2024-10-01 22:39:47.293694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.265 [2024-10-01 22:39:47.305764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.265 [2024-10-01 22:39:47.306037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.265 [2024-10-01 22:39:47.306053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.265 [2024-10-01 22:39:47.318126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.265 [2024-10-01 22:39:47.318394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.265 [2024-10-01 22:39:47.318410] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.265 [2024-10-01 22:39:47.330487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.265 [2024-10-01 22:39:47.330773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.265 [2024-10-01 22:39:47.330788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.265 [2024-10-01 22:39:47.342850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.265 [2024-10-01 22:39:47.343167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.265 [2024-10-01 22:39:47.343183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.265 [2024-10-01 22:39:47.355214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.265 [2024-10-01 22:39:47.355531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.265 [2024-10-01 22:39:47.355547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.265 [2024-10-01 22:39:47.367589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.265 [2024-10-01 22:39:47.367878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.265 [2024-10-01 22:39:47.367895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.265 [2024-10-01 22:39:47.379950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.265 [2024-10-01 22:39:47.380121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.265 [2024-10-01 22:39:47.380136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.265 [2024-10-01 22:39:47.392308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.265 [2024-10-01 22:39:47.392591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.265 [2024-10-01 22:39:47.392606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.265 [2024-10-01 22:39:47.404703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.265 [2024-10-01 22:39:47.404973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.265 [2024-10-01 22:39:47.404989] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.265 [2024-10-01 22:39:47.417045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.265 [2024-10-01 22:39:47.417326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.265 [2024-10-01 22:39:47.417341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.265 [2024-10-01 22:39:47.429437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.265 [2024-10-01 22:39:47.429782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.265 [2024-10-01 22:39:47.429798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.265 [2024-10-01 22:39:47.441797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.265 [2024-10-01 22:39:47.442086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.265 [2024-10-01 22:39:47.442102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.265 [2024-10-01 22:39:47.454192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.265 [2024-10-01 22:39:47.454467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.265 [2024-10-01 22:39:47.454483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.265 [2024-10-01 22:39:47.466547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.265 [2024-10-01 22:39:47.466866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.265 [2024-10-01 22:39:47.466882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.265 [2024-10-01 22:39:47.478941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.265 [2024-10-01 22:39:47.479251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.265 [2024-10-01 22:39:47.479267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.265 [2024-10-01 22:39:47.491282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.265 [2024-10-01 22:39:47.491566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.265 [2024-10-01 
22:39:47.491581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.265 [2024-10-01 22:39:47.503930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.265 [2024-10-01 22:39:47.504205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.265 [2024-10-01 22:39:47.504222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.265 [2024-10-01 22:39:47.516285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.265 [2024-10-01 22:39:47.516560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.265 [2024-10-01 22:39:47.516581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.526 [2024-10-01 22:39:47.528664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.527 [2024-10-01 22:39:47.528951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.527 [2024-10-01 22:39:47.528967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.527 [2024-10-01 22:39:47.541039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.527 [2024-10-01 22:39:47.541314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.527 [2024-10-01 22:39:47.541329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.527 [2024-10-01 22:39:47.553397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.527 [2024-10-01 22:39:47.553568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.527 [2024-10-01 22:39:47.553583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.527 [2024-10-01 22:39:47.565858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.527 [2024-10-01 22:39:47.566163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.527 [2024-10-01 22:39:47.566182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.527 [2024-10-01 22:39:47.578219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.527 [2024-10-01 22:39:47.578512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.527 
[2024-10-01 22:39:47.578528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.527 [2024-10-01 22:39:47.590580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.527 [2024-10-01 22:39:47.590896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.527 [2024-10-01 22:39:47.590912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.527 [2024-10-01 22:39:47.602970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.527 [2024-10-01 22:39:47.603307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.527 [2024-10-01 22:39:47.603323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.527 [2024-10-01 22:39:47.615339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.527 [2024-10-01 22:39:47.615656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.527 [2024-10-01 22:39:47.615673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.527 [2024-10-01 22:39:47.627688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.527 [2024-10-01 22:39:47.627858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.527 [2024-10-01 22:39:47.627873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.527 [2024-10-01 22:39:47.640045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.527 [2024-10-01 22:39:47.640356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.527 [2024-10-01 22:39:47.640372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.527 [2024-10-01 22:39:47.652400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.527 [2024-10-01 22:39:47.652682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.527 [2024-10-01 22:39:47.652698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.527 [2024-10-01 22:39:47.664783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.527 [2024-10-01 22:39:47.665086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
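Each injected failure in the stream above and below is logged as the same pair: tcp.c:2233 (data_crc32_calc_done) reports the CRC-32C data-digest mismatch on the TCP qpair, and nvme_qpair.c then prints the resulting completion with status (00/22), i.e. Status Code Type 0h (generic command status) and Status Code 22h, COMMAND TRANSIENT TRANSPORT ERROR. Because that status is transient, the bdev_nvme_set_options --bdev-retry-count -1 call issued above keeps resubmitting the failed 4096-byte writes instead of surfacing them as errors, so the workload runs for its full 2 seconds despite the corruption. The interim counter printed further down, 20580.00 IOPS, 80.39 MiB/s, is consistent with the 4 KiB block size: 20580 × 4096 B ≈ 80.39 MiB/s.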
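For anyone replaying this scenario outside the Jenkins harness, the trace at host/digest.sh@57-69 above boils down to the following sequence. This is a minimal sketch, not the harness itself: it assumes the SPDK tree sits at the workspace path used in this log and that the nvmf target configured earlier in the run is still listening on 10.0.0.2:4420; the polling loop and the trailing kill are stand-ins for the waitforlisten and killprocess helpers.

#!/usr/bin/env bash
# Minimal replay of the run_bperf_err randwrite 4096 128 flow traced above.
# Assumptions: $SPDK mirrors the workspace path from this log, and an nvmf
# target is already serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bperf.sock

# Start bdevperf idle (-z) so the workload is driven later over the RPC socket.
"$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &
bperfpid=$!

# Stand-in for waitforlisten: poll until the UNIX domain socket answers RPCs.
until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done

# digest.sh@61: per-error statistics plus unlimited bdev-level retries, so the
# transient (00/22) completions seen in this log are retried, not failed.
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# digest.sh@64: attach with data digest enabled; the namespace appears as nvme0n1.
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# digest.sh@67: arm crc32c corruption in the accel layer (arguments as traced).
"$SPDK/scripts/rpc.py" -s "$SOCK" accel_error_inject_error -o crc32c -t corrupt -i 256

# digest.sh@69: kick off the 2-second run, then reap bdevperf (killprocess stand-in).
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
kill "$bperfpid"

Without the accel_error_inject_error step, the same workload should complete with no (00/22) completions in the log.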
00:40:52.527 [2024-10-01 22:39:47.665102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.527 [2024-10-01 22:39:47.677156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.527 [2024-10-01 22:39:47.677434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.527 [2024-10-01 22:39:47.677450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.527 [2024-10-01 22:39:47.689554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.527 [2024-10-01 22:39:47.689871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.527 [2024-10-01 22:39:47.689887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.527 [2024-10-01 22:39:47.701959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.527 [2024-10-01 22:39:47.702239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.527 [2024-10-01 22:39:47.702254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.527 [2024-10-01 22:39:47.714334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.527 [2024-10-01 22:39:47.714601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.527 [2024-10-01 22:39:47.714617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.527 [2024-10-01 22:39:47.726711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.527 [2024-10-01 22:39:47.727005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.527 [2024-10-01 22:39:47.727021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.527 [2024-10-01 22:39:47.739100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.527 [2024-10-01 22:39:47.739372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.527 [2024-10-01 22:39:47.739388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.527 [2024-10-01 22:39:47.751471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.527 [2024-10-01 22:39:47.751740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1784 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:40:52.527 [2024-10-01 22:39:47.751757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.527 [2024-10-01 22:39:47.763851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.527 [2024-10-01 22:39:47.764122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.527 [2024-10-01 22:39:47.764138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.527 [2024-10-01 22:39:47.776225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.527 [2024-10-01 22:39:47.776575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.527 [2024-10-01 22:39:47.776591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.789 [2024-10-01 22:39:47.788706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.789 [2024-10-01 22:39:47.789018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.789 [2024-10-01 22:39:47.789034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.789 [2024-10-01 22:39:47.801104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.789 [2024-10-01 22:39:47.801457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.789 [2024-10-01 22:39:47.801473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.789 [2024-10-01 22:39:47.813478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.789 [2024-10-01 22:39:47.813794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.789 [2024-10-01 22:39:47.813809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.789 [2024-10-01 22:39:47.825862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.789 [2024-10-01 22:39:47.826140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.789 [2024-10-01 22:39:47.826156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.789 [2024-10-01 22:39:47.838234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.789 [2024-10-01 22:39:47.838505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3170 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:40:52.789 [2024-10-01 22:39:47.838521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.789 [2024-10-01 22:39:47.850593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.789 [2024-10-01 22:39:47.850878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.789 [2024-10-01 22:39:47.850893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.789 [2024-10-01 22:39:47.862982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.789 [2024-10-01 22:39:47.863254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.789 [2024-10-01 22:39:47.863270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.789 [2024-10-01 22:39:47.875357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.789 [2024-10-01 22:39:47.875637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.789 [2024-10-01 22:39:47.875653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.789 [2024-10-01 22:39:47.887823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.789 [2024-10-01 22:39:47.888169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.789 [2024-10-01 22:39:47.888188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.789 [2024-10-01 22:39:47.900324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.789 [2024-10-01 22:39:47.900603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.789 [2024-10-01 22:39:47.900619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.789 [2024-10-01 22:39:47.912717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.789 [2024-10-01 22:39:47.912991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.789 [2024-10-01 22:39:47.913006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.789 [2024-10-01 22:39:47.925103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.789 [2024-10-01 22:39:47.925409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:333 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:40:52.789 [2024-10-01 22:39:47.925425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.790 [2024-10-01 22:39:47.937496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.790 [2024-10-01 22:39:47.937814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.790 [2024-10-01 22:39:47.937830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.790 [2024-10-01 22:39:47.949851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.790 [2024-10-01 22:39:47.950163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.790 [2024-10-01 22:39:47.950180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.790 [2024-10-01 22:39:47.962264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.790 [2024-10-01 22:39:47.962565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.790 [2024-10-01 22:39:47.962580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.790 [2024-10-01 22:39:47.974610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.790 [2024-10-01 22:39:47.974978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.790 [2024-10-01 22:39:47.974993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.790 [2024-10-01 22:39:47.987020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.790 [2024-10-01 22:39:47.987293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.790 [2024-10-01 22:39:47.987309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.790 [2024-10-01 22:39:47.999397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.790 [2024-10-01 22:39:47.999719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.790 [2024-10-01 22:39:47.999735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.790 [2024-10-01 22:39:48.011817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.790 [2024-10-01 22:39:48.012123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24567 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.790 [2024-10-01 22:39:48.012139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.790 [2024-10-01 22:39:48.024196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.790 [2024-10-01 22:39:48.024462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.790 [2024-10-01 22:39:48.024478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:52.790 [2024-10-01 22:39:48.036578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:52.790 [2024-10-01 22:39:48.036854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:52.790 [2024-10-01 22:39:48.036869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.051 [2024-10-01 22:39:48.048937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.051 [2024-10-01 22:39:48.049254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.051 [2024-10-01 22:39:48.049269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.051 20580.00 IOPS, 80.39 MiB/s [2024-10-01 22:39:48.061351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.051 [2024-10-01 22:39:48.061640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.051 [2024-10-01 22:39:48.061662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.051 [2024-10-01 22:39:48.073719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.051 [2024-10-01 22:39:48.073996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.051 [2024-10-01 22:39:48.074011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.051 [2024-10-01 22:39:48.086138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.051 [2024-10-01 22:39:48.086418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.051 [2024-10-01 22:39:48.086434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.051 [2024-10-01 22:39:48.098536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.051 [2024-10-01 22:39:48.098839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:9 nsid:1 lba:2493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.051 [2024-10-01 22:39:48.098854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.051 [2024-10-01 22:39:48.110921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.051 [2024-10-01 22:39:48.111213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.051 [2024-10-01 22:39:48.111228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.051 [2024-10-01 22:39:48.123298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.051 [2024-10-01 22:39:48.123575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.051 [2024-10-01 22:39:48.123591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.051 [2024-10-01 22:39:48.135707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.051 [2024-10-01 22:39:48.136000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.051 [2024-10-01 22:39:48.136016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.051 [2024-10-01 22:39:48.148082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.051 [2024-10-01 22:39:48.148360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.051 [2024-10-01 22:39:48.148376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.051 [2024-10-01 22:39:48.160461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.051 [2024-10-01 22:39:48.160757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.051 [2024-10-01 22:39:48.160773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.051 [2024-10-01 22:39:48.172839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.051 [2024-10-01 22:39:48.173156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.051 [2024-10-01 22:39:48.173171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.051 [2024-10-01 22:39:48.185241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.052 [2024-10-01 22:39:48.185547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:2542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.052 [2024-10-01 22:39:48.185563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.052 [2024-10-01 22:39:48.197649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.052 [2024-10-01 22:39:48.197935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.052 [2024-10-01 22:39:48.197950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.052 [2024-10-01 22:39:48.210042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.052 [2024-10-01 22:39:48.210340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.052 [2024-10-01 22:39:48.210355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.052 [2024-10-01 22:39:48.222423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.052 [2024-10-01 22:39:48.222726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.052 [2024-10-01 22:39:48.222742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.052 [2024-10-01 22:39:48.234824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.052 [2024-10-01 22:39:48.235109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.052 [2024-10-01 22:39:48.235124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.052 [2024-10-01 22:39:48.247189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.052 [2024-10-01 22:39:48.247495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.052 [2024-10-01 22:39:48.247512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.052 [2024-10-01 22:39:48.259568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.052 [2024-10-01 22:39:48.259907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.052 [2024-10-01 22:39:48.259923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.052 [2024-10-01 22:39:48.271997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.052 [2024-10-01 22:39:48.272294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:8862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.052 [2024-10-01 22:39:48.272310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.052 [2024-10-01 22:39:48.284384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.052 [2024-10-01 22:39:48.284726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.052 [2024-10-01 22:39:48.284742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.052 [2024-10-01 22:39:48.296786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.052 [2024-10-01 22:39:48.297066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.052 [2024-10-01 22:39:48.297081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.313 [2024-10-01 22:39:48.309154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.313 [2024-10-01 22:39:48.309434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.313 [2024-10-01 22:39:48.309450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.313 [2024-10-01 22:39:48.321522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.313 [2024-10-01 22:39:48.321842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.313 [2024-10-01 22:39:48.321861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.313 [2024-10-01 22:39:48.333926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.313 [2024-10-01 22:39:48.334189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.313 [2024-10-01 22:39:48.334205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.313 [2024-10-01 22:39:48.346308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.313 [2024-10-01 22:39:48.346620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.313 [2024-10-01 22:39:48.346639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.313 [2024-10-01 22:39:48.358704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.313 [2024-10-01 22:39:48.358973] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.313 [2024-10-01 22:39:48.358988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.313 [2024-10-01 22:39:48.371077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.313 [2024-10-01 22:39:48.371354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.313 [2024-10-01 22:39:48.371369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.313 [2024-10-01 22:39:48.383454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.313 [2024-10-01 22:39:48.383729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.313 [2024-10-01 22:39:48.383744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.313 [2024-10-01 22:39:48.395848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.313 [2024-10-01 22:39:48.396131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.313 [2024-10-01 22:39:48.396147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.313 [2024-10-01 22:39:48.408240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.313 [2024-10-01 22:39:48.408514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.313 [2024-10-01 22:39:48.408529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.313 [2024-10-01 22:39:48.420619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.313 [2024-10-01 22:39:48.420928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.313 [2024-10-01 22:39:48.420943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.313 [2024-10-01 22:39:48.433005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.313 [2024-10-01 22:39:48.433178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.314 [2024-10-01 22:39:48.433193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.314 [2024-10-01 22:39:48.445355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.314 [2024-10-01 22:39:48.445667] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.314 [2024-10-01 22:39:48.445683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.314 [2024-10-01 22:39:48.457743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.314 [2024-10-01 22:39:48.458016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.314 [2024-10-01 22:39:48.458031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.314 [2024-10-01 22:39:48.470123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.314 [2024-10-01 22:39:48.470432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.314 [2024-10-01 22:39:48.470448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.314 [2024-10-01 22:39:48.482511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.314 [2024-10-01 22:39:48.482794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.314 [2024-10-01 22:39:48.482810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.314 [2024-10-01 22:39:48.494894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.314 [2024-10-01 22:39:48.495165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.314 [2024-10-01 22:39:48.495181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.314 [2024-10-01 22:39:48.507442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.314 [2024-10-01 22:39:48.507609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.314 [2024-10-01 22:39:48.507628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.314 [2024-10-01 22:39:48.519803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.314 [2024-10-01 22:39:48.520106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.314 [2024-10-01 22:39:48.520121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.314 [2024-10-01 22:39:48.532200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.314 [2024-10-01 22:39:48.532481] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.314 [2024-10-01 22:39:48.532496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.314 [2024-10-01 22:39:48.544595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.314 [2024-10-01 22:39:48.544916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.314 [2024-10-01 22:39:48.544932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.314 [2024-10-01 22:39:48.556947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.314 [2024-10-01 22:39:48.557229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.314 [2024-10-01 22:39:48.557245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.575 [2024-10-01 22:39:48.569346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.575 [2024-10-01 22:39:48.569627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.575 [2024-10-01 22:39:48.569643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.575 [2024-10-01 22:39:48.581715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.575 [2024-10-01 22:39:48.582011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.575 [2024-10-01 22:39:48.582027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.575 [2024-10-01 22:39:48.594102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.575 [2024-10-01 22:39:48.594371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.575 [2024-10-01 22:39:48.594386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.575 [2024-10-01 22:39:48.606495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.575 [2024-10-01 22:39:48.606801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.575 [2024-10-01 22:39:48.606817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.575 [2024-10-01 22:39:48.618875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.575 [2024-10-01 
22:39:48.619223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.575 [2024-10-01 22:39:48.619239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.575 [2024-10-01 22:39:48.631253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.575 [2024-10-01 22:39:48.631421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.575 [2024-10-01 22:39:48.631435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.575 [2024-10-01 22:39:48.643608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.575 [2024-10-01 22:39:48.643785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.575 [2024-10-01 22:39:48.643803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.575 [2024-10-01 22:39:48.655972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.575 [2024-10-01 22:39:48.656248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.575 [2024-10-01 22:39:48.656264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.575 [2024-10-01 22:39:48.668347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.575 [2024-10-01 22:39:48.668633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.575 [2024-10-01 22:39:48.668650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.575 [2024-10-01 22:39:48.680736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.575 [2024-10-01 22:39:48.681028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.575 [2024-10-01 22:39:48.681044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.575 [2024-10-01 22:39:48.693101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.575 [2024-10-01 22:39:48.693383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.575 [2024-10-01 22:39:48.693399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.575 [2024-10-01 22:39:48.705473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.575 
[2024-10-01 22:39:48.705765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.575 [2024-10-01 22:39:48.705780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.575 [2024-10-01 22:39:48.717829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.575 [2024-10-01 22:39:48.718124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.575 [2024-10-01 22:39:48.718140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.575 [2024-10-01 22:39:48.730198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.575 [2024-10-01 22:39:48.730364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.575 [2024-10-01 22:39:48.730379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.575 [2024-10-01 22:39:48.742563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.575 [2024-10-01 22:39:48.742851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.575 [2024-10-01 22:39:48.742866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.575 [2024-10-01 22:39:48.754938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.575 [2024-10-01 22:39:48.755240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.575 [2024-10-01 22:39:48.755256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.575 [2024-10-01 22:39:48.767307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.575 [2024-10-01 22:39:48.767569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.575 [2024-10-01 22:39:48.767585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.575 [2024-10-01 22:39:48.779677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.575 [2024-10-01 22:39:48.779953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.575 [2024-10-01 22:39:48.779969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.576 [2024-10-01 22:39:48.792049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 
00:40:53.576 [2024-10-01 22:39:48.792341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.576 [2024-10-01 22:39:48.792357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.576 [2024-10-01 22:39:48.804430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.576 [2024-10-01 22:39:48.804713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.576 [2024-10-01 22:39:48.804729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.576 [2024-10-01 22:39:48.816787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.576 [2024-10-01 22:39:48.817088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.576 [2024-10-01 22:39:48.817103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.837 [2024-10-01 22:39:48.829166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.837 [2024-10-01 22:39:48.829443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.837 [2024-10-01 22:39:48.829459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.837 [2024-10-01 22:39:48.841525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.837 [2024-10-01 22:39:48.841837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.837 [2024-10-01 22:39:48.841853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.837 [2024-10-01 22:39:48.853886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.837 [2024-10-01 22:39:48.854153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.837 [2024-10-01 22:39:48.854169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.837 [2024-10-01 22:39:48.866242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.837 [2024-10-01 22:39:48.866416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.837 [2024-10-01 22:39:48.866431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.837 [2024-10-01 22:39:48.878616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with 
pdu=0x20002887b560 00:40:53.837 [2024-10-01 22:39:48.878794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.837 [2024-10-01 22:39:48.878809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.837 [2024-10-01 22:39:48.890990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.837 [2024-10-01 22:39:48.891270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.837 [2024-10-01 22:39:48.891285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.837 [2024-10-01 22:39:48.903419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.837 [2024-10-01 22:39:48.903697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.837 [2024-10-01 22:39:48.903713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.837 [2024-10-01 22:39:48.915814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.837 [2024-10-01 22:39:48.916086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.837 [2024-10-01 22:39:48.916102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.837 [2024-10-01 22:39:48.928206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.837 [2024-10-01 22:39:48.928478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.837 [2024-10-01 22:39:48.928493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.837 [2024-10-01 22:39:48.940576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.837 [2024-10-01 22:39:48.940859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.837 [2024-10-01 22:39:48.940875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.837 [2024-10-01 22:39:48.952960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.837 [2024-10-01 22:39:48.953284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.837 [2024-10-01 22:39:48.953300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.837 [2024-10-01 22:39:48.965353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) 
with pdu=0x20002887b560 00:40:53.837 [2024-10-01 22:39:48.965522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.837 [2024-10-01 22:39:48.965537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.837 [2024-10-01 22:39:48.977729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.837 [2024-10-01 22:39:48.978015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.837 [2024-10-01 22:39:48.978031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.837 [2024-10-01 22:39:48.990113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.837 [2024-10-01 22:39:48.990420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.837 [2024-10-01 22:39:48.990436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.837 [2024-10-01 22:39:49.002486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.837 [2024-10-01 22:39:49.002662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.837 [2024-10-01 22:39:49.002677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.837 [2024-10-01 22:39:49.014870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.837 [2024-10-01 22:39:49.015146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.837 [2024-10-01 22:39:49.015162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.837 [2024-10-01 22:39:49.027237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.838 [2024-10-01 22:39:49.027511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.838 [2024-10-01 22:39:49.027527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.838 [2024-10-01 22:39:49.039691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560 00:40:53.838 [2024-10-01 22:39:49.039978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:53.838 [2024-10-01 22:39:49.039994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:53.838 [2024-10-01 22:39:49.052056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x14105f0) with pdu=0x20002887b560
00:40:53.838 [2024-10-01 22:39:49.052366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:53.838 [2024-10-01 22:39:49.052382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:40:53.838 20609.50 IOPS, 80.51 MiB/s [2024-10-01 22:39:49.064407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14105f0) with pdu=0x20002887b560
00:40:53.838 [2024-10-01 22:39:49.064696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:53.838 [2024-10-01 22:39:49.064711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:40:53.838
00:40:53.838 Latency(us)
00:40:53.838 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:40:53.838 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:40:53.838 nvme0n1            :       2.01   20612.77      80.52       0.00       0.00    6197.50    2020.69   12615.68
00:40:53.838 ===================================================================================================================
00:40:53.838 Total              :            20612.77      80.52       0.00       0.00    6197.50    2020.69   12615.68
00:40:53.838 {
00:40:53.838   "results": [
00:40:53.838     {
00:40:53.838       "job": "nvme0n1",
00:40:53.838       "core_mask": "0x2",
00:40:53.838       "workload": "randwrite",
00:40:53.838       "status": "finished",
00:40:53.838       "queue_depth": 128,
00:40:53.838       "io_size": 4096,
00:40:53.838       "runtime": 2.007057,
00:40:53.838       "iops": 20612.767848646054,
00:40:53.838       "mibps": 80.51862440877365,
00:40:53.838       "io_failed": 0,
00:40:53.838       "io_timeout": 0,
00:40:53.838       "avg_latency_us": 6197.499179618573,
00:40:53.838       "min_latency_us": 2020.6933333333334,
00:40:53.838       "max_latency_us": 12615.68
00:40:53.838     }
00:40:53.838   ],
00:40:53.838   "core_count": 1
00:40:53.838 }
00:40:54.097 22:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:40:54.097 22:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:40:54.097 22:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:40:54.097 | .driver_specific
00:40:54.097 | .nvme_error
00:40:54.097 | .status_code
00:40:54.097 | .command_transient_transport_error'
00:40:54.098 22:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:40:54.098 22:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 162 > 0 ))
00:40:54.098 22:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 385700
00:40:54.098 22:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 385700 ']'
00:40:54.098 22:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 385700
00:40:54.098 22:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:40:54.098 22:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
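The trace above is the test's pass/fail check: it reads the iostat counters back over the bdevperf RPC socket and asserts that at least one command completed with a transient transport error (here 162 of them). A minimal shell sketch of that check, reconstructed from the traced commands; the helper body is an illustration built from the trace, not a copy of the autotest source:

# Count the transient transport errors the NVMe driver recorded for a bdev,
# using the same RPC call and jq filter as the traced get_transient_errcount.
get_transient_errcount() {
    local bdev=$1
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
}

# The digest-error test passes only if error injection produced real errors:
(( $(get_transient_errcount nvme0n1) > 0 ))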
00:40:54.098 22:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 385700
00:40:54.098 22:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:40:54.098 22:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:40:54.098 22:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 385700'
00:40:54.098 killing process with pid 385700
00:40:54.098 22:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 385700
00:40:54.098 Received shutdown signal, test time was about 2.000000 seconds
00:40:54.098
00:40:54.098 Latency(us)
00:40:54.098 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:40:54.098 ===================================================================================================================
00:40:54.098 Total              :                0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:40:54.098 22:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 385700
00:40:54.358 22:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
22:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
22:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
22:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
22:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
22:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=386400
22:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 386400 /var/tmp/bperf.sock
22:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 386400 ']'
22:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
22:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
22:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
22:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
22:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
22:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:40:54.358 [2024-10-01 22:39:49.554646] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization...
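For this second pass the I/O size grows to 128 KiB (-o 131072) at queue depth 16. The launch pattern traced above is: start bdevperf idle on a private RPC socket (-z), remember its pid, then poll until the socket is listening before issuing any RPCs. A rough shell equivalent; the retry loop is an illustration of what the waitforlisten helper does under these assumptions, not a copy of it:

# Start bdevperf in wait-for-RPC mode so the job can be configured first.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
bperfpid=$!

# Poll for the RPC socket before talking to it (mirrors max_retries=100 above).
for ((i = 0; i < 100; i++)); do
    [[ -S /var/tmp/bperf.sock ]] && break
    sleep 0.1
done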
00:40:54.358 [2024-10-01 22:39:49.554706] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid386400 ]
00:40:54.358 I/O size of 131072 is greater than zero copy threshold (65536).
00:40:54.358 Zero copy mechanism will not be used.
00:40:54.618 [2024-10-01 22:39:49.632685] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:40:54.618 [2024-10-01 22:39:49.686611] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:40:55.188 22:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:40:55.188 22:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:40:55.188 22:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:40:55.188 22:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:40:55.464 22:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:40:55.464 22:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:40:55.464 22:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:40:55.464 22:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:40:55.464 22:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:40:55.464 22:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:40:55.724 nvme0n1
00:40:55.724 22:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:40:55.724 22:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:40:55.724 22:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:40:55.724 22:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:40:55.724 22:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:40:55.724 22:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:40:55.985 I/O size of 131072 is greater than zero copy threshold (65536).
00:40:55.985 Zero copy mechanism will not be used.
00:40:55.985 Running I/O for 2 seconds...
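Everything the 128 KiB pass needs is configured through the RPCs traced above: NVMe error statistics are enabled on the bdevperf side, the TCP controller is attached with data digest turned on (--ddgst), and the accel layer is told to corrupt every 32nd crc32c operation, which is what makes the data digest check fail and surface as the transient transport errors logged below. Condensed into plain shell, assuming the socket targets implied by the trace (bperf_rpc goes to /var/tmp/bperf.sock; rpc_cmd appears to use the target application's default RPC socket):

rpc=scripts/rpc.py

# bdevperf side: keep per-controller NVMe error counters, retry forever.
$rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Clear any previous crc32c injection before attaching the controller.
$rpc accel_error_inject_error -o crc32c -t disable
$rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Now corrupt every 32nd crc32c operation and kick off the configured job.
$rpc accel_error_inject_error -o crc32c -t corrupt -i 32
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests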
00:40:55.985 [2024-10-01 22:39:51.041233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:55.985 [2024-10-01 22:39:51.041453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:55.985 [2024-10-01 22:39:51.041479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:55.985 [2024-10-01 22:39:51.046293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:55.985 [2024-10-01 22:39:51.046500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:55.985 [2024-10-01 22:39:51.046519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:55.985 [2024-10-01 22:39:51.051459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:55.985 [2024-10-01 22:39:51.051663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:55.985 [2024-10-01 22:39:51.051680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:55.985 [2024-10-01 22:39:51.056774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:55.985 [2024-10-01 22:39:51.056974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:55.985 [2024-10-01 22:39:51.056991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:55.985 [2024-10-01 22:39:51.062188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:55.985 [2024-10-01 22:39:51.062389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:55.985 [2024-10-01 22:39:51.062406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:55.985 [2024-10-01 22:39:51.066764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:55.985 [2024-10-01 22:39:51.066953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:55.985 [2024-10-01 22:39:51.066969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:55.985 [2024-10-01 22:39:51.071235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:55.985 [2024-10-01 22:39:51.071435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:55.985 [2024-10-01 22:39:51.071451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:55.985 [2024-10-01 22:39:51.075999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:55.985 [2024-10-01 22:39:51.076198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:55.985 [2024-10-01 22:39:51.076214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:55.985 [2024-10-01 22:39:51.081333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:55.985 [2024-10-01 22:39:51.081531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:55.985 [2024-10-01 22:39:51.081547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:55.985 [2024-10-01 22:39:51.086848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:55.985 [2024-10-01 22:39:51.087045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:55.985 [2024-10-01 22:39:51.087061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:55.985 [2024-10-01 22:39:51.091895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:55.985 [2024-10-01 22:39:51.092093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:55.985 [2024-10-01 22:39:51.092110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:55.985 [2024-10-01 22:39:51.096447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:55.985 [2024-10-01 22:39:51.096648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:55.985 [2024-10-01 22:39:51.096665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:55.985 [2024-10-01 22:39:51.101212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:55.985 [2024-10-01 22:39:51.101410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:55.985 [2024-10-01 22:39:51.101427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:55.985 [2024-10-01 22:39:51.106110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:55.985 [2024-10-01 22:39:51.106309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:55.985 [2024-10-01 22:39:51.106325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:55.986 [2024-10-01 22:39:51.110958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:55.986 [2024-10-01 22:39:51.111154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:55.986 [2024-10-01 22:39:51.111171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:55.986 [2024-10-01 22:39:51.115886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:55.986 [2024-10-01 22:39:51.116084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:55.986 [2024-10-01 22:39:51.116100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:55.986 [2024-10-01 22:39:51.120389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:55.986 [2024-10-01 22:39:51.120578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:55.986 [2024-10-01 22:39:51.120601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:55.986 [2024-10-01 22:39:51.125152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:55.986 [2024-10-01 22:39:51.125348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:55.986 [2024-10-01 22:39:51.125364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:55.986 [2024-10-01 22:39:51.129738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:55.986 [2024-10-01 22:39:51.129936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:55.986 [2024-10-01 22:39:51.129953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:55.986 [2024-10-01 22:39:51.134890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:55.986 [2024-10-01 22:39:51.135098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:55.986 [2024-10-01 22:39:51.135114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:55.986 [2024-10-01 22:39:51.140343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:55.986 [2024-10-01 22:39:51.140539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:55.986 [2024-10-01 22:39:51.140555] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:55.986 [2024-10-01 22:39:51.145469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:55.986 [2024-10-01 22:39:51.145669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:55.986 [2024-10-01 22:39:51.145685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:55.986 [2024-10-01 22:39:51.151676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:55.986 [2024-10-01 22:39:51.151874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:55.986 [2024-10-01 22:39:51.151890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:55.986 [2024-10-01 22:39:51.157092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:55.986 [2024-10-01 22:39:51.157299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:55.986 [2024-10-01 22:39:51.157315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:55.986 [2024-10-01 22:39:51.162367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:55.986 [2024-10-01 22:39:51.162609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:55.986 [2024-10-01 22:39:51.162630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:55.986 [2024-10-01 22:39:51.167427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:55.986 [2024-10-01 22:39:51.167620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:55.986 [2024-10-01 22:39:51.167641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:55.986 [2024-10-01 22:39:51.172909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:55.986 [2024-10-01 22:39:51.173107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:55.986 [2024-10-01 22:39:51.173123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:55.986 [2024-10-01 22:39:51.178173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:55.986 [2024-10-01 22:39:51.178378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:55.986 
[2024-10-01 22:39:51.178394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:55.986 [2024-10-01 22:39:51.183554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:55.986 [2024-10-01 22:39:51.183755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:55.986 [2024-10-01 22:39:51.183772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:55.986 [2024-10-01 22:39:51.188989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:55.986 [2024-10-01 22:39:51.189194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:55.986 [2024-10-01 22:39:51.189209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:55.986 [2024-10-01 22:39:51.195201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:55.986 [2024-10-01 22:39:51.195398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:55.986 [2024-10-01 22:39:51.195414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:55.986 [2024-10-01 22:39:51.201229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:55.986 [2024-10-01 22:39:51.201427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:55.986 [2024-10-01 22:39:51.201443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:55.986 [2024-10-01 22:39:51.206413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:55.986 [2024-10-01 22:39:51.206608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:55.986 [2024-10-01 22:39:51.206628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:55.986 [2024-10-01 22:39:51.212107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:55.986 [2024-10-01 22:39:51.212305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:55.986 [2024-10-01 22:39:51.212321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:55.986 [2024-10-01 22:39:51.217834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:55.986 [2024-10-01 22:39:51.218032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:55.986 [2024-10-01 22:39:51.218048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:55.986 [2024-10-01 22:39:51.223201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:55.986 [2024-10-01 22:39:51.223398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:55.986 [2024-10-01 22:39:51.223415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:55.986 [2024-10-01 22:39:51.227779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:55.986 [2024-10-01 22:39:51.227978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:55.986 [2024-10-01 22:39:51.227995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:55.986 [2024-10-01 22:39:51.232454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:55.986 [2024-10-01 22:39:51.232655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:55.986 [2024-10-01 22:39:51.232671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:56.248 [2024-10-01 22:39:51.237486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.248 [2024-10-01 22:39:51.237688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.248 [2024-10-01 22:39:51.237704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:56.248 [2024-10-01 22:39:51.242113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.248 [2024-10-01 22:39:51.242301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.248 [2024-10-01 22:39:51.242317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:56.248 [2024-10-01 22:39:51.246843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.248 [2024-10-01 22:39:51.247041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.248 [2024-10-01 22:39:51.247058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:56.248 [2024-10-01 22:39:51.251334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.248 [2024-10-01 22:39:51.251531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.248 [2024-10-01 22:39:51.251548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:56.248 [2024-10-01 22:39:51.256059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.248 [2024-10-01 22:39:51.256246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.248 [2024-10-01 22:39:51.256266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:56.248 [2024-10-01 22:39:51.261014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.248 [2024-10-01 22:39:51.261211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.249 [2024-10-01 22:39:51.261227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:56.249 [2024-10-01 22:39:51.265879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.249 [2024-10-01 22:39:51.266078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.249 [2024-10-01 22:39:51.266094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:56.249 [2024-10-01 22:39:51.270675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.249 [2024-10-01 22:39:51.270873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.249 [2024-10-01 22:39:51.270889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:56.249 [2024-10-01 22:39:51.275418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.249 [2024-10-01 22:39:51.275614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.249 [2024-10-01 22:39:51.275636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:56.249 [2024-10-01 22:39:51.280341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.249 [2024-10-01 22:39:51.280537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.249 [2024-10-01 22:39:51.280553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:56.249 [2024-10-01 22:39:51.285008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.249 [2024-10-01 22:39:51.285203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.249 [2024-10-01 22:39:51.285220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:56.249 [2024-10-01 22:39:51.290187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.249 [2024-10-01 22:39:51.290394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.249 [2024-10-01 22:39:51.290410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:56.249 [2024-10-01 22:39:51.294923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.249 [2024-10-01 22:39:51.295119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.249 [2024-10-01 22:39:51.295136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:56.249 [2024-10-01 22:39:51.299483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.249 [2024-10-01 22:39:51.299688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.249 [2024-10-01 22:39:51.299705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:56.249 [2024-10-01 22:39:51.304280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.249 [2024-10-01 22:39:51.304475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.249 [2024-10-01 22:39:51.304490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:56.249 [2024-10-01 22:39:51.308647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.249 [2024-10-01 22:39:51.308757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.249 [2024-10-01 22:39:51.308772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:56.249 [2024-10-01 22:39:51.313608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.249 [2024-10-01 22:39:51.313828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.249 [2024-10-01 22:39:51.313844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:56.249 [2024-10-01 22:39:51.318122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.249 [2024-10-01 22:39:51.318319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.249 [2024-10-01 22:39:51.318335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:56.249 [2024-10-01 22:39:51.322736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.249 [2024-10-01 22:39:51.322933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.249 [2024-10-01 22:39:51.322949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:56.249 [2024-10-01 22:39:51.327242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.249 [2024-10-01 22:39:51.327440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.249 [2024-10-01 22:39:51.327456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:56.249 [2024-10-01 22:39:51.331891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.249 [2024-10-01 22:39:51.332088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.249 [2024-10-01 22:39:51.332104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:56.249 [2024-10-01 22:39:51.336498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.249 [2024-10-01 22:39:51.336699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.249 [2024-10-01 22:39:51.336716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:56.249 [2024-10-01 22:39:51.341318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.249 [2024-10-01 22:39:51.341513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.249 [2024-10-01 22:39:51.341529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:56.249 [2024-10-01 22:39:51.345920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.249 [2024-10-01 22:39:51.346116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.249 [2024-10-01 22:39:51.346133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:56.249 [2024-10-01 22:39:51.350891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.249 [2024-10-01 22:39:51.351100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.249 [2024-10-01 22:39:51.351115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:56.249 [2024-10-01 22:39:51.356149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.249 [2024-10-01 22:39:51.356344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.249 [2024-10-01 22:39:51.356360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:56.249 [2024-10-01 22:39:51.361215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.249 [2024-10-01 22:39:51.361413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.249 [2024-10-01 22:39:51.361429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:56.249 [2024-10-01 22:39:51.365915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.249 [2024-10-01 22:39:51.366113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.249 [2024-10-01 22:39:51.366129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:56.249 [2024-10-01 22:39:51.370572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.249 [2024-10-01 22:39:51.370773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.249 [2024-10-01 22:39:51.370789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:56.249 [2024-10-01 22:39:51.375413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.249 [2024-10-01 22:39:51.375602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.249 [2024-10-01 22:39:51.375618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:56.249 [2024-10-01 22:39:51.380245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.249 [2024-10-01 22:39:51.380442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.249 [2024-10-01 22:39:51.380461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:56.249 [2024-10-01 22:39:51.384934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.249 [2024-10-01 22:39:51.385131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.249 [2024-10-01 22:39:51.385147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:56.249 [2024-10-01 22:39:51.389388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.249 [2024-10-01 22:39:51.389575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.249 [2024-10-01 22:39:51.389590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:56.249 [2024-10-01 22:39:51.394123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.249 [2024-10-01 22:39:51.394330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.249 [2024-10-01 22:39:51.394346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:56.249 [2024-10-01 22:39:51.398768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.249 [2024-10-01 22:39:51.398966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.249 [2024-10-01 22:39:51.398982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:56.249 [2024-10-01 22:39:51.403822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.249 [2024-10-01 22:39:51.404018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.249 [2024-10-01 22:39:51.404034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:56.249 [2024-10-01 22:39:51.409041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.249 [2024-10-01 22:39:51.409238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.249 [2024-10-01 22:39:51.409254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:56.249 [2024-10-01 22:39:51.414028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.249 [2024-10-01 22:39:51.414225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.249 [2024-10-01 22:39:51.414241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:56.249 [2024-10-01 22:39:51.418802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.249 [2024-10-01 22:39:51.418998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.249 [2024-10-01 22:39:51.419014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:56.249 [2024-10-01 22:39:51.423402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.249 [2024-10-01 22:39:51.423603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.249 [2024-10-01 22:39:51.423619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:56.249 [2024-10-01 22:39:51.428037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.249 [2024-10-01 22:39:51.428234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.249 [2024-10-01 22:39:51.428250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:56.249 [2024-10-01 22:39:51.432724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.249 [2024-10-01 22:39:51.432921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.249 [2024-10-01 22:39:51.432937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:56.249 [2024-10-01 22:39:51.437427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.249 [2024-10-01 22:39:51.437629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.250 [2024-10-01 22:39:51.437645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:56.250 [2024-10-01 22:39:51.442256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.250 [2024-10-01 22:39:51.442452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.250 [2024-10-01 22:39:51.442468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:56.250 [2024-10-01 22:39:51.447228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.250 [2024-10-01 22:39:51.447424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.250 [2024-10-01 22:39:51.447440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:56.250 [2024-10-01 22:39:51.452529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.250 [2024-10-01 22:39:51.452730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.250 [2024-10-01 22:39:51.452746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:56.250 [2024-10-01 22:39:51.458132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.250 [2024-10-01 22:39:51.458328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.250 [2024-10-01 22:39:51.458344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:56.250 [2024-10-01 22:39:51.464479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.250 [2024-10-01 22:39:51.464680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.250 [2024-10-01 22:39:51.464696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:56.250 [2024-10-01 22:39:51.470120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.250 [2024-10-01 22:39:51.470327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.250 [2024-10-01 22:39:51.470343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:56.250 [2024-10-01 22:39:51.475572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.250 [2024-10-01 22:39:51.475765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.250 [2024-10-01 22:39:51.475781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:56.250 [2024-10-01 22:39:51.480788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.250 [2024-10-01 22:39:51.480984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.250 [2024-10-01 22:39:51.481000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:56.250 [2024-10-01 22:39:51.486221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.250 [2024-10-01 22:39:51.486418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.250 [2024-10-01 22:39:51.486434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:56.250 [2024-10-01 22:39:51.491638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.250 [2024-10-01 22:39:51.491836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.250 [2024-10-01 22:39:51.491851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:56.250 [2024-10-01 22:39:51.497115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.250 [2024-10-01 22:39:51.497309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.250 [2024-10-01 22:39:51.497325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:56.512 [2024-10-01 22:39:51.502703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.512 [2024-10-01 22:39:51.502912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.512 [2024-10-01 22:39:51.502929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:56.512 [2024-10-01 22:39:51.508058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.512 [2024-10-01 22:39:51.508254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.512 [2024-10-01 22:39:51.508271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:56.512 [2024-10-01 22:39:51.513380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.512 [2024-10-01 22:39:51.513576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.512 [2024-10-01 22:39:51.513595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:56.512 [2024-10-01 22:39:51.518553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.512 [2024-10-01 22:39:51.518753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.512 [2024-10-01 22:39:51.518770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:56.512 [2024-10-01 22:39:51.523854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.512 [2024-10-01 22:39:51.524063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.512 [2024-10-01 22:39:51.524079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:56.512 [2024-10-01 22:39:51.529441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.512 [2024-10-01 22:39:51.529653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.512 [2024-10-01 22:39:51.529669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:56.512 [2024-10-01 22:39:51.536167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.512 [2024-10-01 22:39:51.536366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.512 [2024-10-01 22:39:51.536383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:56.512 [2024-10-01 22:39:51.541315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.512 [2024-10-01 22:39:51.541513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.512 [2024-10-01 22:39:51.541529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:56.512 [2024-10-01 22:39:51.546929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.512 [2024-10-01 22:39:51.547127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.512 [2024-10-01 22:39:51.547143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:56.512 [2024-10-01 22:39:51.552339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.512 [2024-10-01 22:39:51.552536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.512 [2024-10-01 22:39:51.552553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:56.512 [2024-10-01 22:39:51.557642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.512 [2024-10-01 22:39:51.557838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.512 [2024-10-01 22:39:51.557854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:56.512 [2024-10-01 22:39:51.562821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.512 [2024-10-01 22:39:51.563019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.512 [2024-10-01 22:39:51.563035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:56.512 [2024-10-01 22:39:51.567434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.512 [2024-10-01 22:39:51.567635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.512 [2024-10-01 22:39:51.567651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:56.512 [2024-10-01 22:39:51.571830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.512 [2024-10-01 22:39:51.572029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.512 [2024-10-01 22:39:51.572045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:56.512 [2024-10-01 22:39:51.576230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.512 [2024-10-01 22:39:51.576429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.512 [2024-10-01 22:39:51.576445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:56.512 [2024-10-01 22:39:51.580639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.512 [2024-10-01 22:39:51.580846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.512 [2024-10-01 22:39:51.580863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:56.512 [2024-10-01 22:39:51.585293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.512 [2024-10-01 22:39:51.585489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.512 [2024-10-01 22:39:51.585505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:56.512 [2024-10-01 22:39:51.589721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.512 [2024-10-01 22:39:51.589918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.512 [2024-10-01 22:39:51.589934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:56.512 [2024-10-01 22:39:51.594334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.512 [2024-10-01 22:39:51.594530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.512 [2024-10-01 22:39:51.594546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:56.512 [2024-10-01 22:39:51.598852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.512 [2024-10-01 22:39:51.599051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.512 [2024-10-01 22:39:51.599070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:56.512 [2024-10-01 22:39:51.603369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.512 [2024-10-01 22:39:51.603565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.512 [2024-10-01 22:39:51.603581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:56.512 [2024-10-01 22:39:51.607732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.512 [2024-10-01 22:39:51.607929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.512 [2024-10-01 22:39:51.607945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:56.512 [2024-10-01 22:39:51.614089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.512 [2024-10-01 22:39:51.614284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.512 [2024-10-01 22:39:51.614300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:56.512 [2024-10-01 22:39:51.618869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.513 [2024-10-01 22:39:51.619067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.513 [2024-10-01 22:39:51.619083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:56.513 [2024-10-01 22:39:51.623254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.513 [2024-10-01 22:39:51.623453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.513 [2024-10-01 22:39:51.623469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:56.513 [2024-10-01 22:39:51.627787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.513 [2024-10-01 22:39:51.627984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.513 [2024-10-01 22:39:51.628000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:56.513 [2024-10-01 22:39:51.635522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.513 [2024-10-01 22:39:51.635731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.513 [2024-10-01 22:39:51.635747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:56.513 [2024-10-01 22:39:51.642496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.513 [2024-10-01 22:39:51.642697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.513 [2024-10-01 22:39:51.642714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:56.513 [2024-10-01 22:39:51.649049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.513 [2024-10-01 22:39:51.649260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.513 [2024-10-01 22:39:51.649276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:56.513 [2024-10-01 22:39:51.655383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.513 [2024-10-01 22:39:51.655580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.513 [2024-10-01 22:39:51.655596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:56.513 [2024-10-01 22:39:51.661332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.513 [2024-10-01 22:39:51.661538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.513 [2024-10-01 22:39:51.661554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:56.513 [2024-10-01 22:39:51.667535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.513 [2024-10-01 22:39:51.667745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.513 [2024-10-01 22:39:51.667761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:56.513 [2024-10-01 22:39:51.673370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.513 [2024-10-01 22:39:51.673566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.513 [2024-10-01 22:39:51.673581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:56.513 [2024-10-01 22:39:51.677930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.513 [2024-10-01 22:39:51.678128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.513 [2024-10-01 22:39:51.678144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:56.513 [2024-10-01 22:39:51.684357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.513 [2024-10-01 22:39:51.684563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.513 [2024-10-01 22:39:51.684579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:56.513 [2024-10-01 22:39:51.690564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.513 [2024-10-01 22:39:51.690766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.513 [2024-10-01 22:39:51.690782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:56.513 [2024-10-01 22:39:51.695379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.513 [2024-10-01 22:39:51.695574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.513 [2024-10-01 22:39:51.695590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:56.513 [2024-10-01 22:39:51.700160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.513 [2024-10-01 22:39:51.700356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.513 [2024-10-01 22:39:51.700372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:56.513 [2024-10-01 22:39:51.705123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.513 [2024-10-01 22:39:51.705417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.513 [2024-10-01 22:39:51.705435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:56.513 [2024-10-01 22:39:51.709675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.513 [2024-10-01 22:39:51.709875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.513 [2024-10-01 22:39:51.709891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:56.513 [2024-10-01 22:39:51.714389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.513 [2024-10-01 22:39:51.714584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.513 [2024-10-01 22:39:51.714600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:56.513 [2024-10-01 22:39:51.719012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.513 [2024-10-01 22:39:51.719208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.513 [2024-10-01 22:39:51.719223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:56.513 [2024-10-01 22:39:51.723633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.513 [2024-10-01 22:39:51.723819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.513 [2024-10-01 22:39:51.723835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:56.513 [2024-10-01 22:39:51.728551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.513 [2024-10-01 22:39:51.728762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.513 [2024-10-01 22:39:51.728778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:56.513 [2024-10-01 22:39:51.733510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.513 [2024-10-01 22:39:51.733704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.513 [2024-10-01 22:39:51.733720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:56.513 [2024-10-01 22:39:51.738211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.513 [2024-10-01 22:39:51.738409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.513 [2024-10-01 22:39:51.738428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:56.513 [2024-10-01 22:39:51.742735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.513 [2024-10-01 22:39:51.742935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.513 [2024-10-01 22:39:51.742951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:56.513 [2024-10-01 22:39:51.747607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.513 [2024-10-01 22:39:51.747814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.513 [2024-10-01 22:39:51.747830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:56.513 [2024-10-01 22:39:51.752309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.513 [2024-10-01 22:39:51.752497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.513 [2024-10-01 22:39:51.752513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:56.513 [2024-10-01 22:39:51.757162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.513 [2024-10-01 22:39:51.757358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.514 [2024-10-01 22:39:51.757374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:56.514 [2024-10-01 22:39:51.761988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.514 [2024-10-01 22:39:51.762185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.514 [2024-10-01 22:39:51.762200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:56.775 [2024-10-01 22:39:51.766606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.775 [2024-10-01 22:39:51.766811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.775 [2024-10-01 22:39:51.766828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:56.775 [2024-10-01 22:39:51.771457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.775 [2024-10-01 22:39:51.771668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.775 [2024-10-01 22:39:51.771684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:56.775 [2024-10-01 22:39:51.776012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.775 [2024-10-01 22:39:51.776206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.775 [2024-10-01 22:39:51.776222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:56.775 [2024-10-01 22:39:51.780731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.775 [2024-10-01 22:39:51.780934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.775 [2024-10-01 22:39:51.780950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:56.775 [2024-10-01 22:39:51.785344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.775 [2024-10-01 22:39:51.785532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.775 [2024-10-01 22:39:51.785548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:56.775 [2024-10-01 22:39:51.790314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.775 [2024-10-01 22:39:51.790510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.775 [2024-10-01 22:39:51.790526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:56.775 [2024-10-01 22:39:51.795296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.775 [2024-10-01 22:39:51.795485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.775 [2024-10-01 22:39:51.795502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:56.775 [2024-10-01 22:39:51.799841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.775 [2024-10-01 22:39:51.800076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.775 [2024-10-01 22:39:51.800093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:56.775 [2024-10-01 22:39:51.804432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.775 [2024-10-01 22:39:51.804631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.775 [2024-10-01 22:39:51.804648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:56.775 [2024-10-01 22:39:51.809248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.775 [2024-10-01 22:39:51.809444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.775 [2024-10-01 22:39:51.809460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:56.775 [2024-10-01 22:39:51.815069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.775 [2024-10-01 22:39:51.815268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.775 [2024-10-01 22:39:51.815283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:56.775 [2024-10-01 22:39:51.819749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.775 [2024-10-01 22:39:51.819935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.775 [2024-10-01 22:39:51.819951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:56.775 [2024-10-01 22:39:51.824363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.775 [2024-10-01 22:39:51.824562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.775 [2024-10-01 22:39:51.824578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:56.775 [2024-10-01 22:39:51.828781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.775 [2024-10-01 22:39:51.828981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.775 [2024-10-01 22:39:51.828997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:56.775 [2024-10-01 22:39:51.833510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.775 [2024-10-01 22:39:51.833712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.775 [2024-10-01 22:39:51.833728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:56.775 [2024-10-01 22:39:51.838127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.775 [2024-10-01 22:39:51.838325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.775 [2024-10-01 22:39:51.838341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:56.775 [2024-10-01 22:39:51.843093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.775 [2024-10-01 22:39:51.843289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.775 [2024-10-01 22:39:51.843305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:56.775 [2024-10-01 22:39:51.847925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.775 [2024-10-01 22:39:51.848121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.775 [2024-10-01 22:39:51.848137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:56.775 [2024-10-01 22:39:51.853248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.775 [2024-10-01 22:39:51.853446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.775 [2024-10-01 22:39:51.853462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:56.775 [2024-10-01 22:39:51.861221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.775 [2024-10-01 22:39:51.861427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.775 [2024-10-01 22:39:51.861443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:56.775 [2024-10-01 22:39:51.869401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.775 [2024-10-01 22:39:51.869610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.775 [2024-10-01 22:39:51.869635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:56.775 [2024-10-01 22:39:51.878115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.775 [2024-10-01 22:39:51.878311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.776 [2024-10-01 22:39:51.878326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:56.776 [2024-10-01 22:39:51.886516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.776 [2024-10-01 22:39:51.886719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.776 [2024-10-01 22:39:51.886735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:56.776 [2024-10-01 22:39:51.894741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.776 [2024-10-01 22:39:51.894949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.776 [2024-10-01 22:39:51.894965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:56.776 [2024-10-01 22:39:51.903219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.776 [2024-10-01 22:39:51.903426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.776 [2024-10-01 22:39:51.903441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:56.776 [2024-10-01 22:39:51.911181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.776 [2024-10-01 22:39:51.911367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.776 [2024-10-01 22:39:51.911383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:56.776 [2024-10-01 22:39:51.918864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.776 [2024-10-01 22:39:51.919061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.776 [2024-10-01 22:39:51.919077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:56.776 [2024-10-01 22:39:51.926783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.776 [2024-10-01 22:39:51.926884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.776 [2024-10-01 22:39:51.926899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:56.776 [2024-10-01 22:39:51.935193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.776 [2024-10-01 22:39:51.935390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.776 [2024-10-01 22:39:51.935406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:56.776 [2024-10-01 22:39:51.943641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:56.776 [2024-10-01 22:39:51.943840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:56.776 [2024-10-01 22:39:51.943856] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:56.776 [2024-10-01 22:39:51.951206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:56.776 [2024-10-01 22:39:51.951459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:56.776 [2024-10-01 22:39:51.951476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:56.776 [2024-10-01 22:39:51.957550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:56.776 [2024-10-01 22:39:51.957762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:56.776 [2024-10-01 22:39:51.957777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:56.776 [2024-10-01 22:39:51.962923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:56.776 [2024-10-01 22:39:51.963130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:56.776 [2024-10-01 22:39:51.963146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:56.776 [2024-10-01 22:39:51.967704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:56.776 [2024-10-01 22:39:51.967902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:56.776 [2024-10-01 22:39:51.967918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:56.776 [2024-10-01 22:39:51.973773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:56.776 [2024-10-01 22:39:51.973980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:56.776 [2024-10-01 22:39:51.973996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:56.776 [2024-10-01 22:39:51.980249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:56.776 [2024-10-01 22:39:51.980455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:56.776 [2024-10-01 22:39:51.980471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:56.776 [2024-10-01 22:39:51.985824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:56.776 [2024-10-01 22:39:51.986021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:56.776 
[2024-10-01 22:39:51.986037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:56.776 [2024-10-01 22:39:51.992269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:56.776 [2024-10-01 22:39:51.992475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:56.776 [2024-10-01 22:39:51.992494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:56.776 [2024-10-01 22:39:51.997009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:56.776 [2024-10-01 22:39:51.997206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:56.776 [2024-10-01 22:39:51.997222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:56.776 [2024-10-01 22:39:52.002798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:56.776 [2024-10-01 22:39:52.003005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:56.776 [2024-10-01 22:39:52.003021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:56.776 [2024-10-01 22:39:52.008539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:56.776 [2024-10-01 22:39:52.008740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:56.776 [2024-10-01 22:39:52.008755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:56.776 [2024-10-01 22:39:52.013115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:56.776 [2024-10-01 22:39:52.013312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:56.776 [2024-10-01 22:39:52.013328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:56.776 [2024-10-01 22:39:52.017715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:56.776 [2024-10-01 22:39:52.017912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:56.776 [2024-10-01 22:39:52.017929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:56.776 [2024-10-01 22:39:52.022327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:56.776 [2024-10-01 22:39:52.022523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:56.776 [2024-10-01 22:39:52.022539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:57.038 [2024-10-01 22:39:52.026885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.038 [2024-10-01 22:39:52.027083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.038 [2024-10-01 22:39:52.027099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:57.038 [2024-10-01 22:39:52.031411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.038 [2024-10-01 22:39:52.031610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.038 [2024-10-01 22:39:52.031630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:57.038 5868.00 IOPS, 733.50 MiB/s [2024-10-01 22:39:52.037031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.038 [2024-10-01 22:39:52.037236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.038 [2024-10-01 22:39:52.037252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:57.038 [2024-10-01 22:39:52.041655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.038 [2024-10-01 22:39:52.041852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.038 [2024-10-01 22:39:52.041868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:57.038 [2024-10-01 22:39:52.046329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.038 [2024-10-01 22:39:52.046526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.038 [2024-10-01 22:39:52.046542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:57.038 [2024-10-01 22:39:52.050874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.038 [2024-10-01 22:39:52.051071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.038 [2024-10-01 22:39:52.051087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:57.038 [2024-10-01 22:39:52.055826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.038 [2024-10-01 22:39:52.056025] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.038 [2024-10-01 22:39:52.056041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:57.038 [2024-10-01 22:39:52.061455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.039 [2024-10-01 22:39:52.061664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.039 [2024-10-01 22:39:52.061680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:57.039 [2024-10-01 22:39:52.066713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.039 [2024-10-01 22:39:52.066910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.039 [2024-10-01 22:39:52.066926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:57.039 [2024-10-01 22:39:52.072018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.039 [2024-10-01 22:39:52.072224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.039 [2024-10-01 22:39:52.072240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:57.039 [2024-10-01 22:39:52.077330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.039 [2024-10-01 22:39:52.077527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.039 [2024-10-01 22:39:52.077543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:57.039 [2024-10-01 22:39:52.082526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.039 [2024-10-01 22:39:52.082718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.039 [2024-10-01 22:39:52.082734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:57.039 [2024-10-01 22:39:52.088448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.039 [2024-10-01 22:39:52.088647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.039 [2024-10-01 22:39:52.088663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:57.039 [2024-10-01 22:39:52.093666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.039 
[2024-10-01 22:39:52.093862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.039 [2024-10-01 22:39:52.093878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:57.039 [2024-10-01 22:39:52.099207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.039 [2024-10-01 22:39:52.099415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.039 [2024-10-01 22:39:52.099431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:57.039 [2024-10-01 22:39:52.104721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.039 [2024-10-01 22:39:52.104918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.039 [2024-10-01 22:39:52.104935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:57.039 [2024-10-01 22:39:52.110834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.039 [2024-10-01 22:39:52.111033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.039 [2024-10-01 22:39:52.111048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:57.039 [2024-10-01 22:39:52.116583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.039 [2024-10-01 22:39:52.116791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.039 [2024-10-01 22:39:52.116808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:57.039 [2024-10-01 22:39:52.121911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.039 [2024-10-01 22:39:52.122108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.039 [2024-10-01 22:39:52.122124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:57.039 [2024-10-01 22:39:52.126767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.039 [2024-10-01 22:39:52.126996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.039 [2024-10-01 22:39:52.127015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:57.039 [2024-10-01 22:39:52.132316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.039 [2024-10-01 22:39:52.132522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.039 [2024-10-01 22:39:52.132538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:57.039 [2024-10-01 22:39:52.137731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.039 [2024-10-01 22:39:52.137939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.039 [2024-10-01 22:39:52.137955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:57.039 [2024-10-01 22:39:52.144410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.039 [2024-10-01 22:39:52.144606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.039 [2024-10-01 22:39:52.144622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:57.039 [2024-10-01 22:39:52.150026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.039 [2024-10-01 22:39:52.150260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.039 [2024-10-01 22:39:52.150276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:57.039 [2024-10-01 22:39:52.155191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.039 [2024-10-01 22:39:52.155387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.039 [2024-10-01 22:39:52.155403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:57.039 [2024-10-01 22:39:52.159710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.039 [2024-10-01 22:39:52.159908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.039 [2024-10-01 22:39:52.159924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:57.039 [2024-10-01 22:39:52.164145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.039 [2024-10-01 22:39:52.164340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.039 [2024-10-01 22:39:52.164356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:57.039 [2024-10-01 22:39:52.168702] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.039 [2024-10-01 22:39:52.168889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.039 [2024-10-01 22:39:52.168905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:57.039 [2024-10-01 22:39:52.173128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.039 [2024-10-01 22:39:52.173328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.039 [2024-10-01 22:39:52.173345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:57.039 [2024-10-01 22:39:52.177604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.039 [2024-10-01 22:39:52.177809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.039 [2024-10-01 22:39:52.177825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:57.039 [2024-10-01 22:39:52.182409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.039 [2024-10-01 22:39:52.182606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.039 [2024-10-01 22:39:52.182622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:57.039 [2024-10-01 22:39:52.187412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.039 [2024-10-01 22:39:52.187607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.039 [2024-10-01 22:39:52.187628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:57.039 [2024-10-01 22:39:52.191874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.039 [2024-10-01 22:39:52.192072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.039 [2024-10-01 22:39:52.192088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:57.039 [2024-10-01 22:39:52.196820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.039 [2024-10-01 22:39:52.197017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.039 [2024-10-01 22:39:52.197033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:40:57.040 [2024-10-01 22:39:52.201257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.040 [2024-10-01 22:39:52.201452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.040 [2024-10-01 22:39:52.201468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:57.040 [2024-10-01 22:39:52.205719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.040 [2024-10-01 22:39:52.205907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.040 [2024-10-01 22:39:52.205923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:57.040 [2024-10-01 22:39:52.210521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.040 [2024-10-01 22:39:52.210712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.040 [2024-10-01 22:39:52.210729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:57.040 [2024-10-01 22:39:52.215416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.040 [2024-10-01 22:39:52.215622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.040 [2024-10-01 22:39:52.215643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:57.040 [2024-10-01 22:39:52.220258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.040 [2024-10-01 22:39:52.220456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.040 [2024-10-01 22:39:52.220472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:57.040 [2024-10-01 22:39:52.225092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.040 [2024-10-01 22:39:52.225290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.040 [2024-10-01 22:39:52.225306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:57.040 [2024-10-01 22:39:52.230054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.040 [2024-10-01 22:39:52.230242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.040 [2024-10-01 22:39:52.230258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:57.040 [2024-10-01 22:39:52.235020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.040 [2024-10-01 22:39:52.235218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.040 [2024-10-01 22:39:52.235234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:57.040 [2024-10-01 22:39:52.239575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.040 [2024-10-01 22:39:52.239775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.040 [2024-10-01 22:39:52.239792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:57.040 [2024-10-01 22:39:52.244269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.040 [2024-10-01 22:39:52.244455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.040 [2024-10-01 22:39:52.244471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:57.040 [2024-10-01 22:39:52.248788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.040 [2024-10-01 22:39:52.248984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.040 [2024-10-01 22:39:52.249001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:57.040 [2024-10-01 22:39:52.253533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.040 [2024-10-01 22:39:52.253732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.040 [2024-10-01 22:39:52.253751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:57.040 [2024-10-01 22:39:52.258471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.040 [2024-10-01 22:39:52.258670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.040 [2024-10-01 22:39:52.258686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:57.040 [2024-10-01 22:39:52.263046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.040 [2024-10-01 22:39:52.263242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.040 [2024-10-01 22:39:52.263258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:57.040 [2024-10-01 22:39:52.267779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.040 [2024-10-01 22:39:52.267976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.040 [2024-10-01 22:39:52.267993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:57.040 [2024-10-01 22:39:52.272419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.040 [2024-10-01 22:39:52.272619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.040 [2024-10-01 22:39:52.272640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:57.040 [2024-10-01 22:39:52.276828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.040 [2024-10-01 22:39:52.277026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.040 [2024-10-01 22:39:52.277041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:57.040 [2024-10-01 22:39:52.281373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.040 [2024-10-01 22:39:52.281572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.040 [2024-10-01 22:39:52.281588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:57.040 [2024-10-01 22:39:52.285992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.040 [2024-10-01 22:39:52.286187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.040 [2024-10-01 22:39:52.286204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:57.303 [2024-10-01 22:39:52.291571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.303 [2024-10-01 22:39:52.291780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.303 [2024-10-01 22:39:52.291796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:57.303 [2024-10-01 22:39:52.296585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.303 [2024-10-01 22:39:52.296805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.303 [2024-10-01 22:39:52.296821] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:57.303 [2024-10-01 22:39:52.301985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.303 [2024-10-01 22:39:52.302180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.303 [2024-10-01 22:39:52.302196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:57.303 [2024-10-01 22:39:52.308163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.303 [2024-10-01 22:39:52.308361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.303 [2024-10-01 22:39:52.308377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:57.303 [2024-10-01 22:39:52.314117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.303 [2024-10-01 22:39:52.314313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.303 [2024-10-01 22:39:52.314329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:57.303 [2024-10-01 22:39:52.319639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.303 [2024-10-01 22:39:52.319690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.303 [2024-10-01 22:39:52.319705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:57.303 [2024-10-01 22:39:52.325890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.303 [2024-10-01 22:39:52.326086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.303 [2024-10-01 22:39:52.326102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:57.303 [2024-10-01 22:39:52.331351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.303 [2024-10-01 22:39:52.331558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.303 [2024-10-01 22:39:52.331574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:57.303 [2024-10-01 22:39:52.337268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.303 [2024-10-01 22:39:52.337465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.303 
[2024-10-01 22:39:52.337481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:57.303 [2024-10-01 22:39:52.342135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.303 [2024-10-01 22:39:52.342329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.303 [2024-10-01 22:39:52.342348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:57.303 [2024-10-01 22:39:52.347411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.303 [2024-10-01 22:39:52.347607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.303 [2024-10-01 22:39:52.347623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:57.303 [2024-10-01 22:39:52.352404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.303 [2024-10-01 22:39:52.352600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.303 [2024-10-01 22:39:52.352616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:57.303 [2024-10-01 22:39:52.357160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.303 [2024-10-01 22:39:52.357356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.303 [2024-10-01 22:39:52.357371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:57.303 [2024-10-01 22:39:52.361544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.303 [2024-10-01 22:39:52.361745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.303 [2024-10-01 22:39:52.361761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:57.303 [2024-10-01 22:39:52.366201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.303 [2024-10-01 22:39:52.366398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.303 [2024-10-01 22:39:52.366414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:57.303 [2024-10-01 22:39:52.371120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.303 [2024-10-01 22:39:52.371308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.303 [2024-10-01 22:39:52.371323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:57.303 [2024-10-01 22:39:52.375690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.303 [2024-10-01 22:39:52.375888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.303 [2024-10-01 22:39:52.375904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:57.303 [2024-10-01 22:39:52.380443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.303 [2024-10-01 22:39:52.380643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.303 [2024-10-01 22:39:52.380659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:57.303 [2024-10-01 22:39:52.384996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.303 [2024-10-01 22:39:52.385126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.304 [2024-10-01 22:39:52.385141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:57.304 [2024-10-01 22:39:52.389444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.304 [2024-10-01 22:39:52.389644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.304 [2024-10-01 22:39:52.389660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:57.304 [2024-10-01 22:39:52.393598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.304 [2024-10-01 22:39:52.393732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.304 [2024-10-01 22:39:52.393748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:57.304 [2024-10-01 22:39:52.398565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.304 [2024-10-01 22:39:52.398763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.304 [2024-10-01 22:39:52.398779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:57.304 [2024-10-01 22:39:52.403136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.304 [2024-10-01 22:39:52.403333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.304 [2024-10-01 22:39:52.403349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:57.304 [2024-10-01 22:39:52.407868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.304 [2024-10-01 22:39:52.408066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.304 [2024-10-01 22:39:52.408082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:57.304 [2024-10-01 22:39:52.412500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.304 [2024-10-01 22:39:52.412703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.304 [2024-10-01 22:39:52.412719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:57.304 [2024-10-01 22:39:52.417048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.304 [2024-10-01 22:39:52.417246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.304 [2024-10-01 22:39:52.417262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:57.304 [2024-10-01 22:39:52.421837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.304 [2024-10-01 22:39:52.422034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.304 [2024-10-01 22:39:52.422050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:57.304 [2024-10-01 22:39:52.426488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.304 [2024-10-01 22:39:52.426689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.304 [2024-10-01 22:39:52.426705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:57.304 [2024-10-01 22:39:52.431161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.304 [2024-10-01 22:39:52.431356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:57.304 [2024-10-01 22:39:52.431373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:57.304 [2024-10-01 22:39:52.437252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90 00:40:57.304 [2024-10-01 22:39:52.437448] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:57.304 [2024-10-01 22:39:52.437464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:57.304 [2024-10-01 22:39:52.442221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1410930) with pdu=0x20002887df90
00:40:57.304 [2024-10-01 22:39:52.442417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:57.304 [2024-10-01 22:39:52.442433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-record cycle -- a tcp.c:2233:data_crc32_calc_done "Data digest error" on tqpair=(0x1410930), the offending WRITE (sqid:1 cid:15 nsid:1, len:32, varying lba), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion -- repeats for over a hundred further WRITEs between 22:39:52.446 and 22:39:53.036; the individual records, differing only in timestamp, lba, and sqhd, are omitted here ...]
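For reference, the failure volume in a stretch of log like the one above can be tallied offline with grep; a minimal sketch, assuming the console output was captured to build.log (a hypothetical filename):

    # Count injected data-digest failures and the transient-transport completions they produce
    grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' build.log
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' build.log

The two counts should match, since every corrupted WRITE above is completed with status 00/22.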
00:40:57.832 6029.50 IOPS, 753.69 MiB/s
00:40:57.832 Latency(us)
00:40:57.832 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:40:57.832 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:40:57.832 nvme0n1 : 2.00 6026.50 753.31 0.00 0.00 2651.02 1993.39 8574.29
00:40:57.832 ===================================================================================================================
00:40:57.832 Total : 6026.50 753.31 0.00 0.00 2651.02 1993.39 8574.29
00:40:57.832 {
00:40:57.832   "results": [
00:40:57.832     {
00:40:57.832       "job": "nvme0n1",
00:40:57.832       "core_mask": "0x2",
00:40:57.832       "workload": "randwrite",
00:40:57.832       "status": "finished",
00:40:57.832       "queue_depth": 16,
00:40:57.832       "io_size": 131072,
00:40:57.832       "runtime": 2.003651,
00:40:57.832       "iops": 6026.498626756856,
00:40:57.832       "mibps": 753.312328344607,
00:40:57.832       "io_failed": 0,
00:40:57.832       "io_timeout": 0,
00:40:57.832       "avg_latency_us": 2651.0188300897166,
00:40:57.832       "min_latency_us": 1993.3866666666668,
00:40:57.832       "max_latency_us": 8574.293333333333
00:40:57.832     }
00:40:57.832   ],
00:40:57.832   "core_count": 1
00:40:57.832 }
00:40:57.832 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:40:57.832 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:40:57.832 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:40:57.832 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:40:57.832 | .driver_specific
00:40:57.832 | .nvme_error
00:40:57.832 | .status_code
00:40:57.832 | .command_transient_transport_error'
00:40:58.093 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 389 > 0 ))
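The get_transient_errcount helper traced above is just an rpc.py call piped through jq; run by hand it would look roughly like this (socket path, bdev name, and filter are taken from the trace; the JSON shape is inferred from the jq filter, so treat the field layout as an assumption):

    # Query bperf's per-bdev I/O statistics over its RPC socket, then extract the
    # NVMe transient-transport-error counter (389 at this point in the run).
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The (( 389 > 0 )) assertion above is the test's pass condition: injected digest corruption must surface as a nonzero transient-error count rather than as silent data corruption.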
00:40:58.094 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 386400
00:40:58.094 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 386400 ']'
00:40:58.094 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 386400
00:40:58.094 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:40:58.094 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:40:58.094 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 386400
00:40:58.094 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:40:58.094 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:40:58.094 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 386400'
00:40:58.094 killing process with pid 386400
00:40:58.094 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 386400
00:40:58.094 Received shutdown signal, test time was about 2.000000 seconds
00:40:58.094
00:40:58.094 Latency(us)
00:40:58.094 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:40:58.094 ===================================================================================================================
00:40:58.094 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:40:58.094 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 386400
00:40:58.355 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 383984
00:40:58.355 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 383984 ']'
00:40:58.355 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 383984
00:40:58.355 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:40:58.355 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:40:58.355 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 383984
00:40:58.355 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:40:58.355 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:40:58.355 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 383984'
00:40:58.355 killing process with pid 383984
00:40:58.355 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 383984
00:40:58.355 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 383984
00:40:58.615
00:40:58.615 real 0m16.925s
00:40:58.615 user 0m33.103s
00:40:58.615 sys 0m3.755s
00:40:58.615 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable
00:40:58.615 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:40:58.615 ************************************
00:40:58.615 END TEST nvmf_digest_error
00:40:58.615 ************************************
00:40:58.615 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
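autotest_common.sh's killprocess helper, exercised twice above (pids 386400 and 383984), can be pieced together from the traced line numbers; a simplified sketch reconstructed from the @950-@977 checks, not the verbatim helper:

    # Sketch of killprocess reconstructed from the trace lines above.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                           # @950: refuse an empty pid
        if ! kill -0 "$pid"; then                           # @954: bash prints "No such process" if it is gone
            echo "Process with pid $pid is not found"       # @977: seen later, during nvmftestfini
            return 0
        fi
        if [ "$(uname)" = Linux ]; then                     # @955
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid") # @956: reactor_0 / reactor_1 in the runs above
            # @960: the real helper special-cases process_name = sudo;
            # the reactors killed above take the plain path below
        fi
        echo "killing process with pid $pid"                # @968
        kill "$pid"                                         # @969
        wait "$pid"                                         # @974: reap and collect exit status
    }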
host/digest.sh@150 -- # nvmftestfini 00:40:58.615 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # nvmfcleanup 00:40:58.615 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:40:58.615 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:58.615 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:40:58.615 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:58.615 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:58.615 rmmod nvme_tcp 00:40:58.615 rmmod nvme_fabrics 00:40:58.615 rmmod nvme_keyring 00:40:58.615 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:58.615 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:40:58.615 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:40:58.615 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@515 -- # '[' -n 383984 ']' 00:40:58.615 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # killprocess 383984 00:40:58.615 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 383984 ']' 00:40:58.615 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 383984 00:40:58.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (383984) - No such process 00:40:58.615 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 383984 is not found' 00:40:58.615 Process with pid 383984 is not found 00:40:58.615 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:40:58.615 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:40:58.615 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:40:58.615 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:40:58.615 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-save 00:40:58.615 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:40:58.615 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-restore 00:40:58.615 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:58.615 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:58.615 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:58.615 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:58.615 22:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:01.160 22:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:01.160 00:41:01.160 real 0m44.360s 00:41:01.160 user 1m9.378s 00:41:01.160 sys 0m13.177s 00:41:01.160 22:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:01.160 22:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:41:01.160 ************************************ 00:41:01.160 END TEST nvmf_digest 00:41:01.160 ************************************ 00:41:01.160 22:39:55 nvmf_tcp.nvmf_host -- 
nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:41:01.160 22:39:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:41:01.160 22:39:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:41:01.160 22:39:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:41:01.160 22:39:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:41:01.160 22:39:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:01.160 22:39:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:41:01.160 ************************************ 00:41:01.160 START TEST nvmf_bdevperf 00:41:01.160 ************************************ 00:41:01.160 22:39:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:41:01.160 * Looking for test storage... 00:41:01.160 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:41:01.160 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:41:01.160 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:41:01.160 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:41:01.160 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:41:01.160 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:01.160 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:41:01.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:01.161 --rc genhtml_branch_coverage=1 00:41:01.161 --rc genhtml_function_coverage=1 00:41:01.161 --rc genhtml_legend=1 00:41:01.161 --rc geninfo_all_blocks=1 00:41:01.161 --rc geninfo_unexecuted_blocks=1 00:41:01.161 00:41:01.161 ' 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:41:01.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:01.161 --rc genhtml_branch_coverage=1 00:41:01.161 --rc genhtml_function_coverage=1 00:41:01.161 --rc genhtml_legend=1 00:41:01.161 --rc geninfo_all_blocks=1 00:41:01.161 --rc geninfo_unexecuted_blocks=1 00:41:01.161 00:41:01.161 ' 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:41:01.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:01.161 --rc genhtml_branch_coverage=1 00:41:01.161 --rc genhtml_function_coverage=1 00:41:01.161 --rc genhtml_legend=1 00:41:01.161 --rc geninfo_all_blocks=1 00:41:01.161 --rc geninfo_unexecuted_blocks=1 00:41:01.161 00:41:01.161 ' 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:41:01.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:01.161 --rc genhtml_branch_coverage=1 00:41:01.161 --rc genhtml_function_coverage=1 00:41:01.161 --rc genhtml_legend=1 00:41:01.161 --rc geninfo_all_blocks=1 00:41:01.161 --rc geninfo_unexecuted_blocks=1 00:41:01.161 00:41:01.161 ' 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:01.161 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:41:01.162 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:01.162 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:41:01.162 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:01.162 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:01.162 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:01.162 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:01.162 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:01.162 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:01.162 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:01.162 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:01.162 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:01.162 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:01.162 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:01.162 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:01.162 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:41:01.162 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:41:01.162 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:01.162 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # prepare_net_devs 00:41:01.162 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:41:01.162 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:41:01.162 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:01.162 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:01.162 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:01.162 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:41:01.162 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:41:01.162 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:41:01.162 22:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:41:07.930 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:41:07.930 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
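At this point common.sh has matched each detected E810 PCI function against the tcp transport; the per-device scan resumes directly below with the for net_dev loop. The mapping it performs is plain sysfs and can be reproduced standalone; a minimal bash sketch (the two PCI addresses are the ports this host reported above, the stock Linux sysfs layout is the only other assumption):

  #!/usr/bin/env bash
  # Mirror the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) step traced above:
  # list the kernel net device bound to each candidate NIC function.
  for pci in 0000:4b:00.0 0000:4b:00.1; do
    for net in /sys/bus/pci/devices/"$pci"/net/*; do
      [[ -e "$net" ]] && echo "Found net devices under $pci: ${net##*/}"
    done
  done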
00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:41:07.930 Found net devices under 0000:4b:00.0: cvl_0_0 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:41:07.930 Found net devices under 0000:4b:00.1: cvl_0_1 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # is_hw=yes 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:07.930 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:41:07.931 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:07.931 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:07.931 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:07.931 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:08.191 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:08.191 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:08.191 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:08.191 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:08.191 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:08.191 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:08.191 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:08.191 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:08.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:08.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:41:08.191 00:41:08.191 --- 10.0.0.2 ping statistics --- 00:41:08.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:08.191 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:41:08.191 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:08.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:08.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:41:08.191 00:41:08.191 --- 10.0.0.1 ping statistics --- 00:41:08.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:08.191 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:41:08.191 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:08.191 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # return 0 00:41:08.191 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:41:08.191 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:08.191 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:41:08.191 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:41:08.191 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:08.191 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:41:08.191 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:41:08.191 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:41:08.191 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:41:08.191 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:41:08.191 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:08.191 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:41:08.191 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=391417 00:41:08.191 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 391417 00:41:08.191 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:41:08.191 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 391417 ']' 00:41:08.191 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:08.191 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:08.191 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:08.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:08.191 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:08.191 22:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:41:08.451 [2024-10-01 22:40:03.451817] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
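While nvmf_tgt boots (its EAL banner continues directly below), the two successful pings above confirm what nvmf_tcp_init just built: the pair of looped E810 ports now forms a point-to-point link, with the target-side port hidden in its own network namespace. Condensed from the commands traced above into one bash sequence (interface names and addresses exactly as this run used them; only the consolidation is editorial, and the run additionally tags its iptables rule with an -m comment so teardown can strip it later):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator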
00:41:08.451 [2024-10-01 22:40:03.451872] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:08.451 [2024-10-01 22:40:03.536430] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:41:08.451 [2024-10-01 22:40:03.629912] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:08.451 [2024-10-01 22:40:03.629972] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:08.451 [2024-10-01 22:40:03.629980] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:08.451 [2024-10-01 22:40:03.629987] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:08.451 [2024-10-01 22:40:03.629993] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:08.451 [2024-10-01 22:40:03.630124] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:41:08.451 [2024-10-01 22:40:03.630288] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:41:08.451 [2024-10-01 22:40:03.630289] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:41:09.022 22:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:09.022 22:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:41:09.022 22:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:41:09.022 22:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:09.022 22:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:41:09.283 22:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:09.283 22:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:09.283 22:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:09.283 22:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:41:09.283 [2024-10-01 22:40:04.300740] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:09.283 22:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:09.283 22:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:09.283 22:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:09.283 22:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:41:09.283 Malloc0 00:41:09.283 22:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:09.283 22:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:09.283 22:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:09.283 22:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:41:09.283 22:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
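rpc_cmd in the trace above is a thin wrapper around scripts/rpc.py talking to the freshly started nvmf_tgt; together with the nvmf_subsystem_add_ns and nvmf_subsystem_add_listener calls that follow directly below, five RPCs provision the whole target. As a standalone bash sketch (arguments exactly as logged in this run; the default /var/tmp/spdk.sock RPC socket is the only assumption):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport, flags as logged
  $rpc bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420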
00:41:09.283 22:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:09.283 22:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:09.283 22:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:41:09.283 22:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:09.283 22:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:09.283 22:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:09.283 22:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:41:09.283 [2024-10-01 22:40:04.356341] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:09.283 22:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:09.283 22:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:41:09.283 22:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:41:09.283 22:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:41:09.283 22:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:41:09.283 22:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:09.283 22:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:09.283 { 00:41:09.283 "params": { 00:41:09.283 "name": "Nvme$subsystem", 00:41:09.283 "trtype": "$TEST_TRANSPORT", 00:41:09.283 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:09.283 "adrfam": "ipv4", 00:41:09.283 "trsvcid": "$NVMF_PORT", 00:41:09.283 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:09.283 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:09.283 "hdgst": ${hdgst:-false}, 00:41:09.283 "ddgst": ${ddgst:-false} 00:41:09.283 }, 00:41:09.283 "method": "bdev_nvme_attach_controller" 00:41:09.283 } 00:41:09.283 EOF 00:41:09.283 )") 00:41:09.283 22:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:41:09.283 22:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 00:41:09.283 22:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:41:09.283 22:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:41:09.283 "params": { 00:41:09.283 "name": "Nvme1", 00:41:09.283 "trtype": "tcp", 00:41:09.283 "traddr": "10.0.0.2", 00:41:09.283 "adrfam": "ipv4", 00:41:09.283 "trsvcid": "4420", 00:41:09.283 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:09.283 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:09.283 "hdgst": false, 00:41:09.283 "ddgst": false 00:41:09.283 }, 00:41:09.283 "method": "bdev_nvme_attach_controller" 00:41:09.283 }' 00:41:09.283 [2024-10-01 22:40:04.418429] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:41:09.284 [2024-10-01 22:40:04.418486] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid391658 ] 00:41:09.284 [2024-10-01 22:40:04.478883] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:09.543 [2024-10-01 22:40:04.544097] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:41:09.543 Running I/O for 1 seconds... 00:41:10.926 9027.00 IOPS, 35.26 MiB/s 00:41:10.926 Latency(us) 00:41:10.926 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:10.926 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:41:10.926 Verification LBA range: start 0x0 length 0x4000 00:41:10.926 Nvme1n1 : 1.01 9055.93 35.37 0.00 0.00 14075.86 2580.48 14417.92 00:41:10.926 =================================================================================================================== 00:41:10.926 Total : 9055.93 35.37 0.00 0.00 14075.86 2580.48 14417.92 00:41:10.926 22:40:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=391906 00:41:10.926 22:40:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:41:10.926 22:40:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:41:10.926 22:40:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:41:10.926 22:40:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:41:10.926 22:40:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:41:10.926 22:40:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:10.926 22:40:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:10.926 { 00:41:10.926 "params": { 00:41:10.926 "name": "Nvme$subsystem", 00:41:10.926 "trtype": "$TEST_TRANSPORT", 00:41:10.926 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:10.926 "adrfam": "ipv4", 00:41:10.926 "trsvcid": "$NVMF_PORT", 00:41:10.926 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:10.926 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:10.926 "hdgst": ${hdgst:-false}, 00:41:10.926 "ddgst": ${ddgst:-false} 00:41:10.926 }, 00:41:10.926 "method": "bdev_nvme_attach_controller" 00:41:10.926 } 00:41:10.926 EOF 00:41:10.926 )") 00:41:10.926 22:40:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:41:10.926 22:40:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 00:41:10.926 22:40:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:41:10.926 22:40:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:41:10.926 "params": { 00:41:10.926 "name": "Nvme1", 00:41:10.926 "trtype": "tcp", 00:41:10.926 "traddr": "10.0.0.2", 00:41:10.926 "adrfam": "ipv4", 00:41:10.926 "trsvcid": "4420", 00:41:10.926 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:10.926 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:10.926 "hdgst": false, 00:41:10.926 "ddgst": false 00:41:10.926 }, 00:41:10.926 "method": "bdev_nvme_attach_controller" 00:41:10.926 }' 00:41:10.926 [2024-10-01 22:40:05.996427] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
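As the 15-second bdevperf instance starts here (its EAL banner continues below), note that both runs in this test share one invocation pattern: gen_nvmf_target_json prints the bdev_nvme_attach_controller config shown above, bash exposes it on an anonymous /dev/fd descriptor, and bdevperf consumes it via --json. A minimal sketch of that pattern (path and flags exactly as traced; assumes nvmf/common.sh is sourced so gen_nvmf_target_json is defined, and the process substitution stands in for the /dev/fd/63 redirection the script used):

  bp=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
  # 128 outstanding I/Os, 4096-byte I/Os, verify workload, 15 s run, -f as logged
  $bp --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 -f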
00:41:10.926 [2024-10-01 22:40:05.996484] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid391906 ] 00:41:10.926 [2024-10-01 22:40:06.057905] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:10.926 [2024-10-01 22:40:06.120790] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:41:11.498 Running I/O for 15 seconds... 00:41:13.952 11006.00 IOPS, 42.99 MiB/s 10970.00 IOPS, 42.85 MiB/s 22:40:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 391417 00:41:13.952 22:40:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:41:13.952 [2024-10-01 22:40:08.963122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:89640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:13.952 [2024-10-01 22:40:08.963164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.952 [2024-10-01 22:40:08.963191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:89904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:13.952 [2024-10-01 22:40:08.963203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.952 [2024-10-01 22:40:08.963215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:89912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:13.952 [2024-10-01 22:40:08.963224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.952 [2024-10-01 22:40:08.963234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:13.952 [2024-10-01 22:40:08.963244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.952 [2024-10-01 22:40:08.963254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:89928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:13.952 [2024-10-01 22:40:08.963263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.952 [2024-10-01 22:40:08.963274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:89936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:13.952 [2024-10-01 22:40:08.963282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.952 [2024-10-01 22:40:08.963292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:89944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:13.952 [2024-10-01 22:40:08.963302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.952 [2024-10-01 22:40:08.963312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:89952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:13.952 [2024-10-01 22:40:08.963321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.952 [2024-10-01 22:40:08.963331 .. 22:40:08.964235] nvme_qpair.c: 243/474: *NOTICE*: [49 identical NOTICE pairs condensed: WRITE sqid:1 cid:(varies) nsid:1 lba:(89968 through 90352, step 8) len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0] 00:41:13.953 [2024-10-01 22:40:08.964245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:90360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:13.953
[2024-10-01 22:40:08.964252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.953 [2024-10-01 22:40:08.964262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:90368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:13.953 [2024-10-01 22:40:08.964269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.953 [2024-10-01 22:40:08.964278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:90376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:13.953 [2024-10-01 22:40:08.964286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.953 [2024-10-01 22:40:08.964295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:90384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:13.953 [2024-10-01 22:40:08.964302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.953 [2024-10-01 22:40:08.964312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:90392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:13.953 [2024-10-01 22:40:08.964320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.953 [2024-10-01 22:40:08.964329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:13.953 [2024-10-01 22:40:08.964336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.953 [2024-10-01 22:40:08.964347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:90408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:13.953 [2024-10-01 22:40:08.964355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.953 [2024-10-01 22:40:08.964365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:90416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:13.953 [2024-10-01 22:40:08.964372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.953 [2024-10-01 22:40:08.964382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:90424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:13.953 [2024-10-01 22:40:08.964389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.953 [2024-10-01 22:40:08.964399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:90432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:13.953 [2024-10-01 22:40:08.964406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.953 [2024-10-01 22:40:08.964416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:90440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:13.954 [2024-10-01 22:40:08.964423] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.964433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:90448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:13.954 [2024-10-01 22:40:08.964440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.964449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:90456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:13.954 [2024-10-01 22:40:08.964457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.964467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:90464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:13.954 [2024-10-01 22:40:08.964474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.964484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:90472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:13.954 [2024-10-01 22:40:08.964491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.964501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:90480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:13.954 [2024-10-01 22:40:08.964508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.964518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:90488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:13.954 [2024-10-01 22:40:08.964525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.964534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:90496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:13.954 [2024-10-01 22:40:08.964542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.964551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:90504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:13.954 [2024-10-01 22:40:08.964560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.964570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:90512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:13.954 [2024-10-01 22:40:08.964577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.964586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:90520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:13.954 [2024-10-01 22:40:08.964594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.964603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:90528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:13.954 [2024-10-01 22:40:08.964610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.964620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:90536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:13.954 [2024-10-01 22:40:08.964631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.964641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:90544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:13.954 [2024-10-01 22:40:08.964648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.964660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:90552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:13.954 [2024-10-01 22:40:08.964668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.964677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:89648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:13.954 [2024-10-01 22:40:08.964685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.964694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:89656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:13.954 [2024-10-01 22:40:08.964702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.964711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:89664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:13.954 [2024-10-01 22:40:08.964719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.964728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:89672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:13.954 [2024-10-01 22:40:08.964736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.964746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:89680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:13.954 [2024-10-01 22:40:08.964753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.964763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:89688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:13.954 [2024-10-01 22:40:08.964770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.964779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:89696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:13.954 [2024-10-01 22:40:08.964788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.964798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:90560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:13.954 [2024-10-01 22:40:08.964805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.964815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:13.954 [2024-10-01 22:40:08.964822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.964831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:90576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:13.954 [2024-10-01 22:40:08.964838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.964848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:90584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:13.954 [2024-10-01 22:40:08.964855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.964865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:90592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:13.954 [2024-10-01 22:40:08.964872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.964881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:90600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:13.954 [2024-10-01 22:40:08.964888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.964899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:90608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:13.954 [2024-10-01 22:40:08.964907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.964916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:90616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:13.954 [2024-10-01 22:40:08.964924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.964933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:90624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:13.954 [2024-10-01 22:40:08.964941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 
22:40:08.964951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:90632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:13.954 [2024-10-01 22:40:08.964958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.964968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:90640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:13.954 [2024-10-01 22:40:08.964975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.964985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:90648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:13.954 [2024-10-01 22:40:08.964993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.965004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:90656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:13.954 [2024-10-01 22:40:08.965012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.965021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:89704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:13.954 [2024-10-01 22:40:08.965029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.965038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:89712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:13.954 [2024-10-01 22:40:08.965045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.965055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:89720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:13.954 [2024-10-01 22:40:08.965063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.965072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:89728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:13.954 [2024-10-01 22:40:08.965080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.965090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:89736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:13.954 [2024-10-01 22:40:08.965097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.965107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:89744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:13.954 [2024-10-01 22:40:08.965114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.965124] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:89752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:13.954 [2024-10-01 22:40:08.965131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.965141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:89760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:13.954 [2024-10-01 22:40:08.965148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.965157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:89768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:13.954 [2024-10-01 22:40:08.965165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.965175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:89776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:13.954 [2024-10-01 22:40:08.965182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.965191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:89784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:13.954 [2024-10-01 22:40:08.965199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.965212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:89792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:13.954 [2024-10-01 22:40:08.965221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.965230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:89800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:13.954 [2024-10-01 22:40:08.965238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.965247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:89808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:13.954 [2024-10-01 22:40:08.965254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.965264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:89816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:13.954 [2024-10-01 22:40:08.965271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.965281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:89824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:13.954 [2024-10-01 22:40:08.965288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.965297] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:95 nsid:1 lba:89832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:13.954 [2024-10-01 22:40:08.965305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.965314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:89840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:13.954 [2024-10-01 22:40:08.965322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.965332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:89848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:13.954 [2024-10-01 22:40:08.965339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.965348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:89856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:13.954 [2024-10-01 22:40:08.965356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.965365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:89864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:13.954 [2024-10-01 22:40:08.965373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.965382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:89872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:13.954 [2024-10-01 22:40:08.965390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.965399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:89880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:13.954 [2024-10-01 22:40:08.965406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.954 [2024-10-01 22:40:08.965416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:89888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:13.954 [2024-10-01 22:40:08.965424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.955 [2024-10-01 22:40:08.965434] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bed20 is same with the state(6) to be set 00:41:13.955 [2024-10-01 22:40:08.965443] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:41:13.955 [2024-10-01 22:40:08.965449] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:41:13.955 [2024-10-01 22:40:08.965456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89896 len:8 PRP1 0x0 PRP2 0x0 00:41:13.955 [2024-10-01 22:40:08.965464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:13.955 [2024-10-01 22:40:08.965502] 
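The status pair printed on every completion above, "(00/08)", decodes as NVMe status code type 0x00 (generic command status) and status code 0x08, "Command Aborted due to SQ Deletion": the controller aborts all I/O still outstanding on a submission queue that gets deleted during the reset. A minimal, self-contained C sketch of that decode, following the completion-queue-entry DW3 layout from the NVMe base specification rather than any SPDK-internal helper:

#include <stdint.h>
#include <stdio.h>

/* Decode NVMe completion DW3 per the NVMe base spec:
 * bit 16 = phase tag (P), bits 24:17 = status code (SC),
 * bits 27:25 = status code type (SCT), bit 30 = more (M),
 * bit 31 = do-not-retry (DNR). */
static void decode_cpl_status(uint32_t dw3)
{
    unsigned p   = (dw3 >> 16) & 0x1;
    unsigned sc  = (dw3 >> 17) & 0xff;
    unsigned sct = (dw3 >> 25) & 0x7;
    unsigned m   = (dw3 >> 30) & 0x1;
    unsigned dnr = (dw3 >> 31) & 0x1;

    printf("(%02x/%02x) p:%u m:%u dnr:%u%s\n", sct, sc, p, m, dnr,
           (sct == 0x0 && sc == 0x08) ? "  -> ABORTED - SQ DELETION" : "");
}

int main(void)
{
    /* SCT 0x0 (generic), SC 0x08 (command aborted due to SQ deletion),
     * i.e. the "(00/08)" printed throughout the log above. */
    decode_cpl_status(0x08u << 17);
    return 0;
}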
00:41:13.955 [2024-10-01 22:40:08.965502] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14bed20 was disconnected and freed. reset controller.
00:41:13.955 [2024-10-01 22:40:08.969047] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:13.955 [2024-10-01 22:40:08.969096] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:13.955 [2024-10-01 22:40:08.969968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:13.955 [2024-10-01 22:40:08.970005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:13.955 [2024-10-01 22:40:08.970016] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:13.955 [2024-10-01 22:40:08.970262] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:13.955 [2024-10-01 22:40:08.970489] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:13.955 [2024-10-01 22:40:08.970498] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:13.955 [2024-10-01 22:40:08.970506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:13.955 [2024-10-01 22:40:08.974227] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:41:13.955 [2024-10-01 22:40:08.983350] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:13.955 [2024-10-01 22:40:08.984000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:13.955 [2024-10-01 22:40:08.984036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:13.955 [2024-10-01 22:40:08.984047] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:13.955 [2024-10-01 22:40:08.984290] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:13.955 [2024-10-01 22:40:08.984517] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:13.955 [2024-10-01 22:40:08.984526] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:13.955 [2024-10-01 22:40:08.984534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:13.955 [2024-10-01 22:40:08.988138] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
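Each failed reconnect attempt in the cycle above starts with posix_sock_create() reporting errno = 111, which is ECONNREFUSED on Linux: nothing is listening on the target address 10.0.0.2:4420 anymore, so the TCP handshake is rejected before any NVMe/TCP traffic can flow. The same failure mode can be reproduced with plain POSIX sockets; a minimal sketch, using the address and port taken from the log and nothing SPDK-specific:

#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port = htons(4420) }; /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);        /* target address from the log */

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    /* With no listener on 10.0.0.2:4420 this fails with errno = 111
     * (ECONNREFUSED), the same error posix_sock_create() logs above. */
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}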
00:41:13.955 [2024-10-01 22:40:08.997247] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:13.955 [2024-10-01 22:40:08.997949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:13.955 [2024-10-01 22:40:08.997986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:13.955 [2024-10-01 22:40:08.997998] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:13.955 [2024-10-01 22:40:08.998241] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:13.955 [2024-10-01 22:40:08.998471] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:13.955 [2024-10-01 22:40:08.998481] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:13.955 [2024-10-01 22:40:08.998488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:13.955 [2024-10-01 22:40:09.002105] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:13.955 [2024-10-01 22:40:09.011217] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:13.955 [2024-10-01 22:40:09.011888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:13.955 [2024-10-01 22:40:09.011925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:13.955 [2024-10-01 22:40:09.011936] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:13.955 [2024-10-01 22:40:09.012180] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:13.955 [2024-10-01 22:40:09.012406] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:13.955 [2024-10-01 22:40:09.012415] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:13.955 [2024-10-01 22:40:09.012422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:13.955 [2024-10-01 22:40:09.016036] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:13.955 [2024-10-01 22:40:09.025158] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:13.955 [2024-10-01 22:40:09.025789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:13.955 [2024-10-01 22:40:09.025826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:13.955 [2024-10-01 22:40:09.025838] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:13.955 [2024-10-01 22:40:09.026080] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:13.955 [2024-10-01 22:40:09.026307] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:13.955 [2024-10-01 22:40:09.026315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:13.955 [2024-10-01 22:40:09.026323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:13.955 [2024-10-01 22:40:09.029928] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:13.955 [2024-10-01 22:40:09.039050] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:13.955 [2024-10-01 22:40:09.039724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:13.955 [2024-10-01 22:40:09.039761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:13.955 [2024-10-01 22:40:09.039772] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:13.955 [2024-10-01 22:40:09.040015] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:13.955 [2024-10-01 22:40:09.040241] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:13.955 [2024-10-01 22:40:09.040250] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:13.955 [2024-10-01 22:40:09.040258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:13.955 [2024-10-01 22:40:09.043869] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:13.955 [2024-10-01 22:40:09.053000] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:13.955 [2024-10-01 22:40:09.053568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:13.955 [2024-10-01 22:40:09.053604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:13.955 [2024-10-01 22:40:09.053617] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:13.955 [2024-10-01 22:40:09.053872] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:13.955 [2024-10-01 22:40:09.054101] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:13.955 [2024-10-01 22:40:09.054109] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:13.955 [2024-10-01 22:40:09.054117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:13.955 [2024-10-01 22:40:09.057716] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:13.955 [2024-10-01 22:40:09.067045] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:13.955 [2024-10-01 22:40:09.067650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:13.955 [2024-10-01 22:40:09.067688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:13.955 [2024-10-01 22:40:09.067699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:13.955 [2024-10-01 22:40:09.067942] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:13.955 [2024-10-01 22:40:09.068168] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:13.955 [2024-10-01 22:40:09.068177] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:13.955 [2024-10-01 22:40:09.068185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:13.955 [2024-10-01 22:40:09.071876] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:13.955 [2024-10-01 22:40:09.080999] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:13.955 [2024-10-01 22:40:09.081537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:13.955 [2024-10-01 22:40:09.081557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:13.955 [2024-10-01 22:40:09.081565] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:13.955 [2024-10-01 22:40:09.081795] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:13.955 [2024-10-01 22:40:09.082019] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:13.955 [2024-10-01 22:40:09.082027] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:13.955 [2024-10-01 22:40:09.082034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:13.955 [2024-10-01 22:40:09.085626] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:13.955 [2024-10-01 22:40:09.094950] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:13.955 [2024-10-01 22:40:09.095481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:13.955 [2024-10-01 22:40:09.095497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:13.955 [2024-10-01 22:40:09.095509] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:13.955 [2024-10-01 22:40:09.095738] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:13.955 [2024-10-01 22:40:09.095961] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:13.955 [2024-10-01 22:40:09.095970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:13.955 [2024-10-01 22:40:09.095977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:13.955 [2024-10-01 22:40:09.099566] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:13.955 [2024-10-01 22:40:09.108900] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:13.955 [2024-10-01 22:40:09.109539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:13.955 [2024-10-01 22:40:09.109576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:13.955 [2024-10-01 22:40:09.109587] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:13.955 [2024-10-01 22:40:09.109841] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:13.955 [2024-10-01 22:40:09.110068] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:13.955 [2024-10-01 22:40:09.110077] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:13.955 [2024-10-01 22:40:09.110084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:13.955 [2024-10-01 22:40:09.113681] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:13.955 [2024-10-01 22:40:09.122801] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:13.955 [2024-10-01 22:40:09.123278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:13.955 [2024-10-01 22:40:09.123297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:13.955 [2024-10-01 22:40:09.123305] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:13.955 [2024-10-01 22:40:09.123528] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:13.955 [2024-10-01 22:40:09.123756] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:13.955 [2024-10-01 22:40:09.123765] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:13.955 [2024-10-01 22:40:09.123772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:13.955 [2024-10-01 22:40:09.127362] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:13.955 [2024-10-01 22:40:09.136683] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:13.955 [2024-10-01 22:40:09.137256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:13.955 [2024-10-01 22:40:09.137272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:13.955 [2024-10-01 22:40:09.137280] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:13.955 [2024-10-01 22:40:09.137502] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:13.956 [2024-10-01 22:40:09.137730] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:13.956 [2024-10-01 22:40:09.137745] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:13.956 [2024-10-01 22:40:09.137752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:13.956 [2024-10-01 22:40:09.141343] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:13.956 [2024-10-01 22:40:09.150678] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:13.956 [2024-10-01 22:40:09.151343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:13.956 [2024-10-01 22:40:09.151381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:13.956 [2024-10-01 22:40:09.151392] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:13.956 [2024-10-01 22:40:09.151643] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:13.956 [2024-10-01 22:40:09.151870] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:13.956 [2024-10-01 22:40:09.151881] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:13.956 [2024-10-01 22:40:09.151890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:13.956 [2024-10-01 22:40:09.155491] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:13.956 [2024-10-01 22:40:09.164619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:13.956 [2024-10-01 22:40:09.165178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:13.956 [2024-10-01 22:40:09.165197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:13.956 [2024-10-01 22:40:09.165205] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:13.956 [2024-10-01 22:40:09.165428] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:13.956 [2024-10-01 22:40:09.165656] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:13.956 [2024-10-01 22:40:09.165665] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:13.956 [2024-10-01 22:40:09.165672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:13.956 [2024-10-01 22:40:09.169267] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:13.956 [2024-10-01 22:40:09.178606] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:13.956 [2024-10-01 22:40:09.179165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:13.956 [2024-10-01 22:40:09.179183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:13.956 [2024-10-01 22:40:09.179190] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:13.956 [2024-10-01 22:40:09.179412] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:13.956 [2024-10-01 22:40:09.179639] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:13.956 [2024-10-01 22:40:09.179648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:13.956 [2024-10-01 22:40:09.179655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:13.956 [2024-10-01 22:40:09.183250] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:13.956 [2024-10-01 22:40:09.192592] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:13.956 [2024-10-01 22:40:09.193127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:13.956 [2024-10-01 22:40:09.193144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:13.956 [2024-10-01 22:40:09.193151] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:13.956 [2024-10-01 22:40:09.193373] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:13.956 [2024-10-01 22:40:09.193595] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:13.956 [2024-10-01 22:40:09.193604] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:13.956 [2024-10-01 22:40:09.193611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:13.956 [2024-10-01 22:40:09.197210] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:14.217 [2024-10-01 22:40:09.206630] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:14.217 [2024-10-01 22:40:09.207199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:14.217 [2024-10-01 22:40:09.207216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:14.217 [2024-10-01 22:40:09.207224] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:14.217 [2024-10-01 22:40:09.207446] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:14.217 [2024-10-01 22:40:09.207673] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:14.217 [2024-10-01 22:40:09.207682] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:14.217 [2024-10-01 22:40:09.207689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:14.217 [2024-10-01 22:40:09.211283] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:14.217 [2024-10-01 22:40:09.220622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:14.217 [2024-10-01 22:40:09.221156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:14.217 [2024-10-01 22:40:09.221173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:14.217 [2024-10-01 22:40:09.221181] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:14.217 [2024-10-01 22:40:09.221404] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:14.217 [2024-10-01 22:40:09.221632] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:14.217 [2024-10-01 22:40:09.221641] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:14.217 [2024-10-01 22:40:09.221648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:14.217 [2024-10-01 22:40:09.225243] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:14.217 [2024-10-01 22:40:09.234581] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:14.217 [2024-10-01 22:40:09.235043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:14.217 [2024-10-01 22:40:09.235060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:14.217 [2024-10-01 22:40:09.235067] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:14.217 [2024-10-01 22:40:09.235293] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:14.217 [2024-10-01 22:40:09.235515] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:14.217 [2024-10-01 22:40:09.235523] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:14.218 [2024-10-01 22:40:09.235530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:14.218 [2024-10-01 22:40:09.239136] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:14.218 [2024-10-01 22:40:09.248477] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:14.218 [2024-10-01 22:40:09.249020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:14.218 [2024-10-01 22:40:09.249038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:14.218 [2024-10-01 22:40:09.249045] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:14.218 [2024-10-01 22:40:09.249267] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:14.218 [2024-10-01 22:40:09.249489] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:14.218 [2024-10-01 22:40:09.249497] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:14.218 [2024-10-01 22:40:09.249504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:14.218 [2024-10-01 22:40:09.253102] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:41:14.218 [2024-10-01 22:40:09.262538] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:14.218 [2024-10-01 22:40:09.263089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:14.218 [2024-10-01 22:40:09.263105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:14.218 [2024-10-01 22:40:09.263112] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:14.218 [2024-10-01 22:40:09.263334] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:14.218 [2024-10-01 22:40:09.263556] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:14.218 [2024-10-01 22:40:09.263565] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:14.218 [2024-10-01 22:40:09.263572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:14.218 [2024-10-01 22:40:09.267174] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:41:14.218 [2024-10-01 22:40:09.276506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:14.218 [2024-10-01 22:40:09.277056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:14.218 [2024-10-01 22:40:09.277072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:14.218 [2024-10-01 22:40:09.277080] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:14.218 [2024-10-01 22:40:09.277302] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:14.218 [2024-10-01 22:40:09.277523] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:14.218 [2024-10-01 22:40:09.277531] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:14.218 [2024-10-01 22:40:09.277542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:14.218 [2024-10-01 22:40:09.281142] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:41:14.218 [2024-10-01 22:40:09.290480] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:14.218 [2024-10-01 22:40:09.291016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:14.218 [2024-10-01 22:40:09.291032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:14.218 [2024-10-01 22:40:09.291040] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:14.218 [2024-10-01 22:40:09.291262] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:14.218 [2024-10-01 22:40:09.291484] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:14.218 [2024-10-01 22:40:09.291492] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:14.218 [2024-10-01 22:40:09.291498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:14.218 [2024-10-01 22:40:09.295100] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:41:14.218 [2024-10-01 22:40:09.304445] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:14.218 [2024-10-01 22:40:09.304988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:14.218 [2024-10-01 22:40:09.305004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:14.218 [2024-10-01 22:40:09.305011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:14.218 [2024-10-01 22:40:09.305233] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:14.218 [2024-10-01 22:40:09.305455] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:14.218 [2024-10-01 22:40:09.305464] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:14.218 [2024-10-01 22:40:09.305471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:14.218 [2024-10-01 22:40:09.309073] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:41:14.218 [2024-10-01 22:40:09.318418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:14.218 [2024-10-01 22:40:09.318968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:14.218 [2024-10-01 22:40:09.318985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:14.218 [2024-10-01 22:40:09.318993] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:14.218 [2024-10-01 22:40:09.319215] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:14.218 [2024-10-01 22:40:09.319438] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:14.218 [2024-10-01 22:40:09.319446] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:14.218 [2024-10-01 22:40:09.319453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:14.218 [2024-10-01 22:40:09.323053] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:41:14.218 [2024-10-01 22:40:09.332390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:14.218 [2024-10-01 22:40:09.332920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:14.218 [2024-10-01 22:40:09.332936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:14.218 [2024-10-01 22:40:09.332943] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:14.218 [2024-10-01 22:40:09.333165] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:14.218 [2024-10-01 22:40:09.333387] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:14.218 [2024-10-01 22:40:09.333395] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:14.218 [2024-10-01 22:40:09.333402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:14.218 [2024-10-01 22:40:09.337003] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:41:14.218 [2024-10-01 22:40:09.346333] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:14.218 [2024-10-01 22:40:09.346876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:14.218 [2024-10-01 22:40:09.346892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:14.218 [2024-10-01 22:40:09.346899] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:14.218 [2024-10-01 22:40:09.347122] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:14.218 [2024-10-01 22:40:09.347344] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:14.218 [2024-10-01 22:40:09.347352] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:14.218 [2024-10-01 22:40:09.347359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:14.218 [2024-10-01 22:40:09.350969] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:41:14.218 [2024-10-01 22:40:09.360308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:14.218 [2024-10-01 22:40:09.360729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:14.218 [2024-10-01 22:40:09.360748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:14.218 [2024-10-01 22:40:09.360756] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:14.218 [2024-10-01 22:40:09.360978] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:14.218 [2024-10-01 22:40:09.361201] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:14.218 [2024-10-01 22:40:09.361208] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:14.218 [2024-10-01 22:40:09.361215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:14.218 [2024-10-01 22:40:09.364813] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:41:14.218 [2024-10-01 22:40:09.374356] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:14.218 [2024-10-01 22:40:09.374771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:14.218 [2024-10-01 22:40:09.374788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:14.218 [2024-10-01 22:40:09.374796] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:14.218 [2024-10-01 22:40:09.375022] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:14.218 [2024-10-01 22:40:09.375245] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:14.218 [2024-10-01 22:40:09.375252] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:14.218 [2024-10-01 22:40:09.375259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:14.218 [2024-10-01 22:40:09.378858] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:41:14.218 [2024-10-01 22:40:09.388404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:14.218 [2024-10-01 22:40:09.388904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:14.218 [2024-10-01 22:40:09.388920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:14.218 [2024-10-01 22:40:09.388927] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:14.218 [2024-10-01 22:40:09.389149] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:14.218 [2024-10-01 22:40:09.389371] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:14.218 [2024-10-01 22:40:09.389380] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:14.218 [2024-10-01 22:40:09.389387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:14.218 [2024-10-01 22:40:09.392985] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:41:14.218 [2024-10-01 22:40:09.402332] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:14.218 [2024-10-01 22:40:09.402851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:14.218 [2024-10-01 22:40:09.402867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:14.218 [2024-10-01 22:40:09.402875] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:14.218 [2024-10-01 22:40:09.403096] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:14.218 [2024-10-01 22:40:09.403318] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:14.218 [2024-10-01 22:40:09.403327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:14.218 [2024-10-01 22:40:09.403334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:14.218 [2024-10-01 22:40:09.406934] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:41:14.218 [2024-10-01 22:40:09.416272] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:14.218 [2024-10-01 22:40:09.416735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:14.218 [2024-10-01 22:40:09.416751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:14.218 [2024-10-01 22:40:09.416759] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:14.218 [2024-10-01 22:40:09.416981] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:14.219 [2024-10-01 22:40:09.417203] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:14.219 [2024-10-01 22:40:09.417212] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:14.219 [2024-10-01 22:40:09.417219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:14.219 [2024-10-01 22:40:09.420823] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:41:14.219 [2024-10-01 22:40:09.430156] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:14.219 [2024-10-01 22:40:09.430752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:14.219 [2024-10-01 22:40:09.430789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:14.219 [2024-10-01 22:40:09.430801] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:14.219 [2024-10-01 22:40:09.431045] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:14.219 [2024-10-01 22:40:09.431272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:14.219 [2024-10-01 22:40:09.431280] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:14.219 [2024-10-01 22:40:09.431288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:14.219 [2024-10-01 22:40:09.434893] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:41:14.219 [2024-10-01 22:40:09.444009] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:14.219 [2024-10-01 22:40:09.444702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:14.219 [2024-10-01 22:40:09.444740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:14.219 [2024-10-01 22:40:09.444751] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:14.219 [2024-10-01 22:40:09.444993] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:14.219 [2024-10-01 22:40:09.445219] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:14.219 [2024-10-01 22:40:09.445228] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:14.219 [2024-10-01 22:40:09.445236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:14.219 [2024-10-01 22:40:09.448841] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:41:14.219 9196.33 IOPS, 35.92 MiB/s [2024-10-01 22:40:09.458788] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:14.219 [2024-10-01 22:40:09.459421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:14.219 [2024-10-01 22:40:09.459457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:14.219 [2024-10-01 22:40:09.459468] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:14.219 [2024-10-01 22:40:09.459721] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:14.219 [2024-10-01 22:40:09.459948] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:14.219 [2024-10-01 22:40:09.459957] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:14.219 [2024-10-01 22:40:09.459965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:14.219 [2024-10-01 22:40:09.463567] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:41:14.480 [2024-10-01 22:40:09.472704] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:14.480 [2024-10-01 22:40:09.473192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:14.480 [2024-10-01 22:40:09.473216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:14.480 [2024-10-01 22:40:09.473225] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:14.480 [2024-10-01 22:40:09.473447] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:14.480 [2024-10-01 22:40:09.473678] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:14.480 [2024-10-01 22:40:09.473687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:14.480 [2024-10-01 22:40:09.473695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:14.480 [2024-10-01 22:40:09.477295] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:41:14.480 [2024-10-01 22:40:09.486642] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:14.480 [2024-10-01 22:40:09.487181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:14.480 [2024-10-01 22:40:09.487197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:14.480 [2024-10-01 22:40:09.487205] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:14.480 [2024-10-01 22:40:09.487428] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:14.480 [2024-10-01 22:40:09.487655] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:14.480 [2024-10-01 22:40:09.487664] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:14.480 [2024-10-01 22:40:09.487671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:14.480 [2024-10-01 22:40:09.491266] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:41:14.480 [2024-10-01 22:40:09.500828] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:14.480 [2024-10-01 22:40:09.501494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:14.480 [2024-10-01 22:40:09.501532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:14.480 [2024-10-01 22:40:09.501543] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:14.480 [2024-10-01 22:40:09.501796] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:14.480 [2024-10-01 22:40:09.502023] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:14.480 [2024-10-01 22:40:09.502032] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:14.480 [2024-10-01 22:40:09.502039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:14.480 [2024-10-01 22:40:09.505649] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:41:14.480 [2024-10-01 22:40:09.514779] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:14.480 [2024-10-01 22:40:09.515465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:14.480 [2024-10-01 22:40:09.515502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:14.480 [2024-10-01 22:40:09.515513] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:14.480 [2024-10-01 22:40:09.515764] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:14.481 [2024-10-01 22:40:09.515996] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:14.481 [2024-10-01 22:40:09.516006] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:14.481 [2024-10-01 22:40:09.516013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:14.481 [2024-10-01 22:40:09.519615] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:41:14.481 [2024-10-01 22:40:09.528753] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:14.481 [2024-10-01 22:40:09.529344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:14.481 [2024-10-01 22:40:09.529363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:14.481 [2024-10-01 22:40:09.529372] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:14.481 [2024-10-01 22:40:09.529595] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:14.481 [2024-10-01 22:40:09.529826] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:14.481 [2024-10-01 22:40:09.529837] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:14.481 [2024-10-01 22:40:09.529844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:14.481 [2024-10-01 22:40:09.533443] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:41:14.481 [2024-10-01 22:40:09.542798] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:14.481 [2024-10-01 22:40:09.543329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:14.481 [2024-10-01 22:40:09.543345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:14.481 [2024-10-01 22:40:09.543353] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:14.481 [2024-10-01 22:40:09.543575] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:14.481 [2024-10-01 22:40:09.543804] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:14.481 [2024-10-01 22:40:09.543813] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:14.481 [2024-10-01 22:40:09.543821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:14.481 [2024-10-01 22:40:09.547420] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:41:14.481 [2024-10-01 22:40:09.556779] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:14.481 [2024-10-01 22:40:09.557355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:14.481 [2024-10-01 22:40:09.557371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:14.481 [2024-10-01 22:40:09.557380] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:14.481 [2024-10-01 22:40:09.557601] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:14.481 [2024-10-01 22:40:09.557834] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:14.481 [2024-10-01 22:40:09.557843] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:14.481 [2024-10-01 22:40:09.557850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:14.481 [2024-10-01 22:40:09.561454] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:41:14.481 [2024-10-01 22:40:09.570799] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:14.481 [2024-10-01 22:40:09.571328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:14.481 [2024-10-01 22:40:09.571344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:14.481 [2024-10-01 22:40:09.571352] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:14.481 [2024-10-01 22:40:09.571573] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:14.481 [2024-10-01 22:40:09.571803] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:14.481 [2024-10-01 22:40:09.571812] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:14.481 [2024-10-01 22:40:09.571819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:14.481 [2024-10-01 22:40:09.575412] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:41:14.481 [2024-10-01 22:40:09.584838] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:14.481 [2024-10-01 22:40:09.585371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:14.481 [2024-10-01 22:40:09.585387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:14.481 [2024-10-01 22:40:09.585396] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:14.481 [2024-10-01 22:40:09.585618] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:14.481 [2024-10-01 22:40:09.585846] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:14.481 [2024-10-01 22:40:09.585855] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:14.481 [2024-10-01 22:40:09.585862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:14.481 [2024-10-01 22:40:09.589457] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:41:14.481 [2024-10-01 22:40:09.598794] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:14.481 [2024-10-01 22:40:09.599232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:14.481 [2024-10-01 22:40:09.599250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:14.481 [2024-10-01 22:40:09.599258] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:14.481 [2024-10-01 22:40:09.599480] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:14.481 [2024-10-01 22:40:09.599709] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:14.481 [2024-10-01 22:40:09.599719] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:14.481 [2024-10-01 22:40:09.599726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:14.481 [2024-10-01 22:40:09.603335] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:41:14.481 [2024-10-01 22:40:09.612675] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:14.481 [2024-10-01 22:40:09.613232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:14.481 [2024-10-01 22:40:09.613248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:14.481 [2024-10-01 22:40:09.613260] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:14.481 [2024-10-01 22:40:09.613482] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:14.481 [2024-10-01 22:40:09.613709] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:14.481 [2024-10-01 22:40:09.613719] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:14.481 [2024-10-01 22:40:09.613726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:14.481 [2024-10-01 22:40:09.617326] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:41:14.481 [2024-10-01 22:40:09.626668] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:14.481 [2024-10-01 22:40:09.627098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:14.481 [2024-10-01 22:40:09.627114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:14.481 [2024-10-01 22:40:09.627122] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:14.481 [2024-10-01 22:40:09.627344] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:14.481 [2024-10-01 22:40:09.627566] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:14.481 [2024-10-01 22:40:09.627574] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:14.481 [2024-10-01 22:40:09.627581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:14.481 [2024-10-01 22:40:09.631181] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:41:14.481 [2024-10-01 22:40:09.640518] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:14.481 [2024-10-01 22:40:09.641122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:14.481 [2024-10-01 22:40:09.641160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:14.481 [2024-10-01 22:40:09.641173] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:14.481 [2024-10-01 22:40:09.641419] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:14.481 [2024-10-01 22:40:09.641654] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:14.481 [2024-10-01 22:40:09.641664] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:14.481 [2024-10-01 22:40:09.641671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:14.481 [2024-10-01 22:40:09.645273] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:41:14.481 [2024-10-01 22:40:09.654419] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:14.481 [2024-10-01 22:40:09.655119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:14.481 [2024-10-01 22:40:09.655155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:14.481 [2024-10-01 22:40:09.655166] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:14.481 [2024-10-01 22:40:09.655409] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:14.481 [2024-10-01 22:40:09.655645] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:14.481 [2024-10-01 22:40:09.655660] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:14.482 [2024-10-01 22:40:09.655668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:14.482 [2024-10-01 22:40:09.659265] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:41:14.482 [2024-10-01 22:40:09.668383] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:14.482 [2024-10-01 22:40:09.668971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:14.482 [2024-10-01 22:40:09.669008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:14.482 [2024-10-01 22:40:09.669021] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:14.482 [2024-10-01 22:40:09.669264] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:14.482 [2024-10-01 22:40:09.669491] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:14.482 [2024-10-01 22:40:09.669500] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:14.482 [2024-10-01 22:40:09.669507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:14.482 [2024-10-01 22:40:09.673113] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:41:14.482 [2024-10-01 22:40:09.682232] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:14.482 [2024-10-01 22:40:09.682950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:14.482 [2024-10-01 22:40:09.682987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:14.482 [2024-10-01 22:40:09.683000] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:14.482 [2024-10-01 22:40:09.683246] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:14.482 [2024-10-01 22:40:09.683472] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:14.482 [2024-10-01 22:40:09.683480] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:14.482 [2024-10-01 22:40:09.683488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:14.482 [2024-10-01 22:40:09.687092] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:41:14.482 [2024-10-01 22:40:09.696212] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:14.482 [2024-10-01 22:40:09.696773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:14.482 [2024-10-01 22:40:09.696792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:14.482 [2024-10-01 22:40:09.696801] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:14.482 [2024-10-01 22:40:09.697024] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:14.482 [2024-10-01 22:40:09.697246] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:14.482 [2024-10-01 22:40:09.697254] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:14.482 [2024-10-01 22:40:09.697261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:14.482 [2024-10-01 22:40:09.700866] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:41:14.482 [2024-10-01 22:40:09.710194] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:14.482 [2024-10-01 22:40:09.710795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:14.482 [2024-10-01 22:40:09.710812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:14.482 [2024-10-01 22:40:09.710820] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:14.482 [2024-10-01 22:40:09.711042] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:14.482 [2024-10-01 22:40:09.711264] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:14.482 [2024-10-01 22:40:09.711272] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:14.482 [2024-10-01 22:40:09.711279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:14.482 [2024-10-01 22:40:09.714874] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:41:14.482 [2024-10-01 22:40:09.724203] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:14.482 [2024-10-01 22:40:09.724744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:14.482 [2024-10-01 22:40:09.724761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:14.482 [2024-10-01 22:40:09.724769] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:14.482 [2024-10-01 22:40:09.724990] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:14.482 [2024-10-01 22:40:09.725212] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:14.482 [2024-10-01 22:40:09.725220] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:14.482 [2024-10-01 22:40:09.725227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:14.482 [2024-10-01 22:40:09.728822] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:41:14.743 [2024-10-01 22:40:09.738143] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:14.743 [2024-10-01 22:40:09.738713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:14.743 [2024-10-01 22:40:09.738729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:14.743 [2024-10-01 22:40:09.738736] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:14.743 [2024-10-01 22:40:09.738958] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:14.743 [2024-10-01 22:40:09.739180] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:14.743 [2024-10-01 22:40:09.739188] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:14.743 [2024-10-01 22:40:09.739195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:14.743 [2024-10-01 22:40:09.742790] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:41:14.743 [2024-10-01 22:40:09.752121] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:14.743 [2024-10-01 22:40:09.752899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:14.743 [2024-10-01 22:40:09.752937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:14.743 [2024-10-01 22:40:09.752948] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:14.743 [2024-10-01 22:40:09.753196] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:14.743 [2024-10-01 22:40:09.753422] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:14.743 [2024-10-01 22:40:09.753431] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:14.743 [2024-10-01 22:40:09.753438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:14.743 [2024-10-01 22:40:09.757042] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:41:14.743 [2024-10-01 22:40:09.766167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:14.743 [2024-10-01 22:40:09.766761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:14.743 [2024-10-01 22:40:09.766798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:14.743 [2024-10-01 22:40:09.766811] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:14.743 [2024-10-01 22:40:09.767057] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:14.744 [2024-10-01 22:40:09.767284] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:14.744 [2024-10-01 22:40:09.767292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:14.744 [2024-10-01 22:40:09.767300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:14.744 [2024-10-01 22:40:09.770909] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:41:14.744 [2024-10-01 22:40:09.780031] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:14.744 [2024-10-01 22:40:09.780641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:14.744 [2024-10-01 22:40:09.780679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:14.744 [2024-10-01 22:40:09.780690] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:14.744 [2024-10-01 22:40:09.780933] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:14.744 [2024-10-01 22:40:09.781159] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:14.744 [2024-10-01 22:40:09.781168] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:14.744 [2024-10-01 22:40:09.781175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:14.744 [2024-10-01 22:40:09.784778] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:41:14.744 [2024-10-01 22:40:09.793895] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:14.744 [2024-10-01 22:40:09.794437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:14.744 [2024-10-01 22:40:09.794455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:14.744 [2024-10-01 22:40:09.794464] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:14.744 [2024-10-01 22:40:09.794692] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:14.744 [2024-10-01 22:40:09.794915] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:14.744 [2024-10-01 22:40:09.794924] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:14.744 [2024-10-01 22:40:09.794935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:14.744 [2024-10-01 22:40:09.798529] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:41:14.744 [2024-10-01 22:40:09.807862] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:14.744 [2024-10-01 22:40:09.808500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:14.744 [2024-10-01 22:40:09.808537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:14.744 [2024-10-01 22:40:09.808548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:14.744 [2024-10-01 22:40:09.808798] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:14.744 [2024-10-01 22:40:09.809026] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:14.744 [2024-10-01 22:40:09.809034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:14.744 [2024-10-01 22:40:09.809042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:14.744 [2024-10-01 22:40:09.812642] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:41:14.744 [2024-10-01 22:40:09.821771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:14.744 [2024-10-01 22:40:09.822294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:14.744 [2024-10-01 22:40:09.822330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:14.744 [2024-10-01 22:40:09.822342] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:14.744 [2024-10-01 22:40:09.822584] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:14.744 [2024-10-01 22:40:09.822818] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:14.744 [2024-10-01 22:40:09.822828] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:14.744 [2024-10-01 22:40:09.822836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:14.744 [2024-10-01 22:40:09.826432] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:41:14.744 [2024-10-01 22:40:09.835764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:14.744 [2024-10-01 22:40:09.836287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:14.744 [2024-10-01 22:40:09.836324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:14.744 [2024-10-01 22:40:09.836336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:14.744 [2024-10-01 22:40:09.836582] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:14.744 [2024-10-01 22:40:09.836816] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:14.744 [2024-10-01 22:40:09.836826] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:14.744 [2024-10-01 22:40:09.836833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:14.744 [2024-10-01 22:40:09.840430] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:41:14.744 [2024-10-01 22:40:09.849763] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:14.744 [2024-10-01 22:40:09.850312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:14.744 [2024-10-01 22:40:09.850330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:14.744 [2024-10-01 22:40:09.850338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:14.744 [2024-10-01 22:40:09.850561] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:14.744 [2024-10-01 22:40:09.850798] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:14.744 [2024-10-01 22:40:09.850808] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:14.744 [2024-10-01 22:40:09.850815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:14.744 [2024-10-01 22:40:09.854411] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:41:14.744 [2024-10-01 22:40:09.863742] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:14.744 [2024-10-01 22:40:09.864261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:14.744 [2024-10-01 22:40:09.864297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:14.744 [2024-10-01 22:40:09.864308] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:14.744 [2024-10-01 22:40:09.864551] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:14.744 [2024-10-01 22:40:09.864784] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:14.744 [2024-10-01 22:40:09.864794] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:14.744 [2024-10-01 22:40:09.864802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:14.744 [2024-10-01 22:40:09.868402] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:14.744 [2024-10-01 22:40:09.877734] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:14.744 [2024-10-01 22:40:09.878368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:14.744 [2024-10-01 22:40:09.878405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:14.744 [2024-10-01 22:40:09.878417] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:14.744 [2024-10-01 22:40:09.878671] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:14.744 [2024-10-01 22:40:09.878898] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:14.744 [2024-10-01 22:40:09.878907] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:14.744 [2024-10-01 22:40:09.878915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:14.744 [2024-10-01 22:40:09.882511] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:14.744 [2024-10-01 22:40:09.891636] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:14.744 [2024-10-01 22:40:09.892326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:14.744 [2024-10-01 22:40:09.892364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:14.744 [2024-10-01 22:40:09.892375] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:14.744 [2024-10-01 22:40:09.892635] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:14.744 [2024-10-01 22:40:09.892862] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:14.744 [2024-10-01 22:40:09.892871] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:14.744 [2024-10-01 22:40:09.892879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:14.744 [2024-10-01 22:40:09.896476] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:14.744 [2024-10-01 22:40:09.905643] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:14.744 [2024-10-01 22:40:09.906323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:14.744 [2024-10-01 22:40:09.906360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:14.744 [2024-10-01 22:40:09.906371] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:14.744 [2024-10-01 22:40:09.906613] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:14.744 [2024-10-01 22:40:09.906847] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:14.744 [2024-10-01 22:40:09.906857] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:14.744 [2024-10-01 22:40:09.906864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:14.745 [2024-10-01 22:40:09.910460] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:14.745 [2024-10-01 22:40:09.919591] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:14.745 [2024-10-01 22:40:09.920274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:14.745 [2024-10-01 22:40:09.920311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:14.745 [2024-10-01 22:40:09.920323] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:14.745 [2024-10-01 22:40:09.920565] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:14.745 [2024-10-01 22:40:09.920799] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:14.745 [2024-10-01 22:40:09.920809] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:14.745 [2024-10-01 22:40:09.920817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:14.745 [2024-10-01 22:40:09.924419] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:14.745 [2024-10-01 22:40:09.933533] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:14.745 [2024-10-01 22:40:09.934162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:14.745 [2024-10-01 22:40:09.934199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:14.745 [2024-10-01 22:40:09.934210] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:14.745 [2024-10-01 22:40:09.934452] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:14.745 [2024-10-01 22:40:09.934687] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:14.745 [2024-10-01 22:40:09.934697] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:14.745 [2024-10-01 22:40:09.934709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:14.745 [2024-10-01 22:40:09.938307] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:14.745 [2024-10-01 22:40:09.947424] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:14.745 [2024-10-01 22:40:09.948057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:14.745 [2024-10-01 22:40:09.948094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:14.745 [2024-10-01 22:40:09.948105] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:14.745 [2024-10-01 22:40:09.948348] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:14.745 [2024-10-01 22:40:09.948574] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:14.745 [2024-10-01 22:40:09.948583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:14.745 [2024-10-01 22:40:09.948590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:14.745 [2024-10-01 22:40:09.952210] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:14.745 [2024-10-01 22:40:09.961326] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:14.745 [2024-10-01 22:40:09.961967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:14.745 [2024-10-01 22:40:09.962004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:14.745 [2024-10-01 22:40:09.962015] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:14.745 [2024-10-01 22:40:09.962257] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:14.745 [2024-10-01 22:40:09.962483] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:14.745 [2024-10-01 22:40:09.962492] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:14.745 [2024-10-01 22:40:09.962500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:14.745 [2024-10-01 22:40:09.966105] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:14.745 [2024-10-01 22:40:09.975229] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:14.745 [2024-10-01 22:40:09.975918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:14.745 [2024-10-01 22:40:09.975955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:14.745 [2024-10-01 22:40:09.975967] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:14.745 [2024-10-01 22:40:09.976209] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:14.745 [2024-10-01 22:40:09.976435] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:14.745 [2024-10-01 22:40:09.976444] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:14.745 [2024-10-01 22:40:09.976452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:14.745 [2024-10-01 22:40:09.980057] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:14.745 [2024-10-01 22:40:09.989178] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:14.745 [2024-10-01 22:40:09.989730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:14.745 [2024-10-01 22:40:09.989771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:14.745 [2024-10-01 22:40:09.989784] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:14.745 [2024-10-01 22:40:09.990030] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:14.745 [2024-10-01 22:40:09.990256] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:14.745 [2024-10-01 22:40:09.990265] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:14.745 [2024-10-01 22:40:09.990273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:14.745 [2024-10-01 22:40:09.993877] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:15.008 [2024-10-01 22:40:10.003814] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.008 [2024-10-01 22:40:10.004294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.008 [2024-10-01 22:40:10.004313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.008 [2024-10-01 22:40:10.004322] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.008 [2024-10-01 22:40:10.004545] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.008 [2024-10-01 22:40:10.004776] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.008 [2024-10-01 22:40:10.004785] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.008 [2024-10-01 22:40:10.004792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.008 [2024-10-01 22:40:10.009049] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:15.008 [2024-10-01 22:40:10.017760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.008 [2024-10-01 22:40:10.018231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.008 [2024-10-01 22:40:10.018249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.008 [2024-10-01 22:40:10.018258] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.008 [2024-10-01 22:40:10.018480] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.008 [2024-10-01 22:40:10.018708] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.008 [2024-10-01 22:40:10.018718] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.008 [2024-10-01 22:40:10.018725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.008 [2024-10-01 22:40:10.022314] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:15.008 [2024-10-01 22:40:10.031826] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.009 [2024-10-01 22:40:10.032406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.009 [2024-10-01 22:40:10.032423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.009 [2024-10-01 22:40:10.032431] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.009 [2024-10-01 22:40:10.032657] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.009 [2024-10-01 22:40:10.032885] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.009 [2024-10-01 22:40:10.032893] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.009 [2024-10-01 22:40:10.032900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.009 [2024-10-01 22:40:10.036493] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:15.009 [2024-10-01 22:40:10.045819] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.009 [2024-10-01 22:40:10.046486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.009 [2024-10-01 22:40:10.046524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.009 [2024-10-01 22:40:10.046535] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.009 [2024-10-01 22:40:10.046787] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.009 [2024-10-01 22:40:10.047014] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.009 [2024-10-01 22:40:10.047023] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.009 [2024-10-01 22:40:10.047031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.009 [2024-10-01 22:40:10.050633] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:15.009 [2024-10-01 22:40:10.059761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.009 [2024-10-01 22:40:10.060293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.009 [2024-10-01 22:40:10.060331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.009 [2024-10-01 22:40:10.060343] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.009 [2024-10-01 22:40:10.060621] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.009 [2024-10-01 22:40:10.061009] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.009 [2024-10-01 22:40:10.061039] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.009 [2024-10-01 22:40:10.061051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.009 [2024-10-01 22:40:10.064717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:15.009 [2024-10-01 22:40:10.073628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.009 [2024-10-01 22:40:10.074161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.009 [2024-10-01 22:40:10.074198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.009 [2024-10-01 22:40:10.074210] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.009 [2024-10-01 22:40:10.074452] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.009 [2024-10-01 22:40:10.074686] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.009 [2024-10-01 22:40:10.074696] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.009 [2024-10-01 22:40:10.074704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.009 [2024-10-01 22:40:10.078309] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:15.009 [2024-10-01 22:40:10.087648] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.009 [2024-10-01 22:40:10.088278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.009 [2024-10-01 22:40:10.088316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.009 [2024-10-01 22:40:10.088327] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.009 [2024-10-01 22:40:10.088570] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.009 [2024-10-01 22:40:10.088803] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.009 [2024-10-01 22:40:10.088813] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.009 [2024-10-01 22:40:10.088820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.009 [2024-10-01 22:40:10.092414] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:15.009 [2024-10-01 22:40:10.101543] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.009 [2024-10-01 22:40:10.102227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.009 [2024-10-01 22:40:10.102264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.009 [2024-10-01 22:40:10.102275] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.009 [2024-10-01 22:40:10.102518] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.009 [2024-10-01 22:40:10.102754] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.009 [2024-10-01 22:40:10.102763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.009 [2024-10-01 22:40:10.102771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.009 [2024-10-01 22:40:10.106372] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:15.009 [2024-10-01 22:40:10.115491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.009 [2024-10-01 22:40:10.116151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.009 [2024-10-01 22:40:10.116188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.009 [2024-10-01 22:40:10.116199] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.009 [2024-10-01 22:40:10.116442] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.009 [2024-10-01 22:40:10.116677] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.009 [2024-10-01 22:40:10.116687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.009 [2024-10-01 22:40:10.116695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.009 [2024-10-01 22:40:10.120293] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:15.009 [2024-10-01 22:40:10.129406] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.009 [2024-10-01 22:40:10.130083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.009 [2024-10-01 22:40:10.130120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.009 [2024-10-01 22:40:10.130136] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.009 [2024-10-01 22:40:10.130378] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.009 [2024-10-01 22:40:10.130604] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.009 [2024-10-01 22:40:10.130613] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.009 [2024-10-01 22:40:10.130621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.009 [2024-10-01 22:40:10.134230] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:15.009 [2024-10-01 22:40:10.143346] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.009 [2024-10-01 22:40:10.143909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.009 [2024-10-01 22:40:10.143946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.009 [2024-10-01 22:40:10.143959] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.009 [2024-10-01 22:40:10.144203] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.009 [2024-10-01 22:40:10.144429] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.009 [2024-10-01 22:40:10.144438] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.009 [2024-10-01 22:40:10.144445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.009 [2024-10-01 22:40:10.148051] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:15.009 [2024-10-01 22:40:10.157390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.009 [2024-10-01 22:40:10.158075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.009 [2024-10-01 22:40:10.158112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.009 [2024-10-01 22:40:10.158123] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.009 [2024-10-01 22:40:10.158366] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.009 [2024-10-01 22:40:10.158592] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.009 [2024-10-01 22:40:10.158600] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.009 [2024-10-01 22:40:10.158608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.009 [2024-10-01 22:40:10.162214] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:15.009 [2024-10-01 22:40:10.171330] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.009 [2024-10-01 22:40:10.171970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.010 [2024-10-01 22:40:10.172007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.010 [2024-10-01 22:40:10.172018] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.010 [2024-10-01 22:40:10.172260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.010 [2024-10-01 22:40:10.172487] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.010 [2024-10-01 22:40:10.172500] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.010 [2024-10-01 22:40:10.172508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.010 [2024-10-01 22:40:10.176113] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:15.010 [2024-10-01 22:40:10.185229] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.010 [2024-10-01 22:40:10.185886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.010 [2024-10-01 22:40:10.185922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.010 [2024-10-01 22:40:10.185935] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.010 [2024-10-01 22:40:10.186178] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.010 [2024-10-01 22:40:10.186404] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.010 [2024-10-01 22:40:10.186413] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.010 [2024-10-01 22:40:10.186421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.010 [2024-10-01 22:40:10.190026] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:15.010 [2024-10-01 22:40:10.199140] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.010 [2024-10-01 22:40:10.199622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.010 [2024-10-01 22:40:10.199671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.010 [2024-10-01 22:40:10.199682] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.010 [2024-10-01 22:40:10.199925] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.010 [2024-10-01 22:40:10.200151] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.010 [2024-10-01 22:40:10.200160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.010 [2024-10-01 22:40:10.200167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.010 [2024-10-01 22:40:10.203780] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:15.010 [2024-10-01 22:40:10.213106] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.010 [2024-10-01 22:40:10.213702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.010 [2024-10-01 22:40:10.213739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.010 [2024-10-01 22:40:10.213751] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.010 [2024-10-01 22:40:10.213997] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.010 [2024-10-01 22:40:10.214223] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.010 [2024-10-01 22:40:10.214232] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.010 [2024-10-01 22:40:10.214239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.010 [2024-10-01 22:40:10.217851] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:15.010 [2024-10-01 22:40:10.226980] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.010 [2024-10-01 22:40:10.227694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.010 [2024-10-01 22:40:10.227731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.010 [2024-10-01 22:40:10.227744] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.010 [2024-10-01 22:40:10.227990] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.010 [2024-10-01 22:40:10.228217] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.010 [2024-10-01 22:40:10.228231] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.010 [2024-10-01 22:40:10.228239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.010 [2024-10-01 22:40:10.231844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:15.010 [2024-10-01 22:40:10.240958] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.010 [2024-10-01 22:40:10.241592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.010 [2024-10-01 22:40:10.241636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.010 [2024-10-01 22:40:10.241649] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.010 [2024-10-01 22:40:10.241895] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.010 [2024-10-01 22:40:10.242122] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.010 [2024-10-01 22:40:10.242131] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.010 [2024-10-01 22:40:10.242138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.010 [2024-10-01 22:40:10.245738] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:15.010 [2024-10-01 22:40:10.254872] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.010 [2024-10-01 22:40:10.255545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.010 [2024-10-01 22:40:10.255582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.010 [2024-10-01 22:40:10.255594] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.010 [2024-10-01 22:40:10.255849] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.010 [2024-10-01 22:40:10.256076] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.010 [2024-10-01 22:40:10.256084] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.010 [2024-10-01 22:40:10.256092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.010 [2024-10-01 22:40:10.259694] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:15.273 [2024-10-01 22:40:10.268809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.273 [2024-10-01 22:40:10.269429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.273 [2024-10-01 22:40:10.269466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.273 [2024-10-01 22:40:10.269478] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.273 [2024-10-01 22:40:10.269735] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.273 [2024-10-01 22:40:10.269963] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.273 [2024-10-01 22:40:10.269972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.273 [2024-10-01 22:40:10.269979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.273 [2024-10-01 22:40:10.273579] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:15.273 [2024-10-01 22:40:10.282700] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.273 [2024-10-01 22:40:10.283357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.273 [2024-10-01 22:40:10.283394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.273 [2024-10-01 22:40:10.283405] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.273 [2024-10-01 22:40:10.283657] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.273 [2024-10-01 22:40:10.283884] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.273 [2024-10-01 22:40:10.283892] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.273 [2024-10-01 22:40:10.283900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.273 [2024-10-01 22:40:10.287500] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:15.273 [2024-10-01 22:40:10.296615] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.273 [2024-10-01 22:40:10.297291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.273 [2024-10-01 22:40:10.297328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.273 [2024-10-01 22:40:10.297340] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.273 [2024-10-01 22:40:10.297582] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.273 [2024-10-01 22:40:10.297817] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.273 [2024-10-01 22:40:10.297826] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.273 [2024-10-01 22:40:10.297834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.273 [2024-10-01 22:40:10.301437] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:15.273 [2024-10-01 22:40:10.310551] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.273 [2024-10-01 22:40:10.311227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.273 [2024-10-01 22:40:10.311264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.273 [2024-10-01 22:40:10.311275] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.273 [2024-10-01 22:40:10.311517] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.273 [2024-10-01 22:40:10.311753] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.273 [2024-10-01 22:40:10.311763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.273 [2024-10-01 22:40:10.311775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.273 [2024-10-01 22:40:10.315371] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:15.273 [2024-10-01 22:40:10.324492] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.273 [2024-10-01 22:40:10.325043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.273 [2024-10-01 22:40:10.325063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.273 [2024-10-01 22:40:10.325071] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.273 [2024-10-01 22:40:10.325294] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.273 [2024-10-01 22:40:10.325516] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.273 [2024-10-01 22:40:10.325524] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.273 [2024-10-01 22:40:10.325531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.273 [2024-10-01 22:40:10.329129] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:15.273 [2024-10-01 22:40:10.338450] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.273 [2024-10-01 22:40:10.339007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.273 [2024-10-01 22:40:10.339023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.273 [2024-10-01 22:40:10.339031] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.273 [2024-10-01 22:40:10.339253] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.273 [2024-10-01 22:40:10.339475] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.273 [2024-10-01 22:40:10.339482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.273 [2024-10-01 22:40:10.339489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.273 [2024-10-01 22:40:10.343084] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:15.273 [2024-10-01 22:40:10.352412] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.273 [2024-10-01 22:40:10.353010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.273 [2024-10-01 22:40:10.353026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.273 [2024-10-01 22:40:10.353033] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.273 [2024-10-01 22:40:10.353255] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.274 [2024-10-01 22:40:10.353477] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.274 [2024-10-01 22:40:10.353485] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.274 [2024-10-01 22:40:10.353492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.274 [2024-10-01 22:40:10.357087] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:15.274 [2024-10-01 22:40:10.366406] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.274 [2024-10-01 22:40:10.366942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.274 [2024-10-01 22:40:10.366958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.274 [2024-10-01 22:40:10.366965] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.274 [2024-10-01 22:40:10.367187] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.274 [2024-10-01 22:40:10.367409] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.274 [2024-10-01 22:40:10.367417] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.274 [2024-10-01 22:40:10.367424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.274 [2024-10-01 22:40:10.371020] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:15.274 [2024-10-01 22:40:10.380337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.274 [2024-10-01 22:40:10.380981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.274 [2024-10-01 22:40:10.381017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.274 [2024-10-01 22:40:10.381029] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.274 [2024-10-01 22:40:10.381271] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.274 [2024-10-01 22:40:10.381497] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.274 [2024-10-01 22:40:10.381507] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.274 [2024-10-01 22:40:10.381514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.274 [2024-10-01 22:40:10.385119] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:15.274 [2024-10-01 22:40:10.394244] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.274 [2024-10-01 22:40:10.394768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.274 [2024-10-01 22:40:10.394805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.274 [2024-10-01 22:40:10.394816] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.274 [2024-10-01 22:40:10.395058] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.274 [2024-10-01 22:40:10.395284] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.274 [2024-10-01 22:40:10.395293] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.274 [2024-10-01 22:40:10.395301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.274 [2024-10-01 22:40:10.398910] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:15.274 [2024-10-01 22:40:10.408252] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.274 [2024-10-01 22:40:10.408763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.274 [2024-10-01 22:40:10.408800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.274 [2024-10-01 22:40:10.408812] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.274 [2024-10-01 22:40:10.409058] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.274 [2024-10-01 22:40:10.409289] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.274 [2024-10-01 22:40:10.409298] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.274 [2024-10-01 22:40:10.409306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.274 [2024-10-01 22:40:10.412911] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:15.274 [2024-10-01 22:40:10.422248] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.274 [2024-10-01 22:40:10.422927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.274 [2024-10-01 22:40:10.422964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.274 [2024-10-01 22:40:10.422976] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.274 [2024-10-01 22:40:10.423223] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.274 [2024-10-01 22:40:10.423449] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.274 [2024-10-01 22:40:10.423457] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.274 [2024-10-01 22:40:10.423465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.274 [2024-10-01 22:40:10.427072] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:15.274 [2024-10-01 22:40:10.436190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.274 [2024-10-01 22:40:10.436660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.274 [2024-10-01 22:40:10.436680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.274 [2024-10-01 22:40:10.436688] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.274 [2024-10-01 22:40:10.436910] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.274 [2024-10-01 22:40:10.437133] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.274 [2024-10-01 22:40:10.437142] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.274 [2024-10-01 22:40:10.437148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.274 [2024-10-01 22:40:10.440747] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
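The follow-on "(9): Bad file descriptor" from nvme_tcp_qpair_process_completions in every cycle is errno 9 (EBADF): by the time the qpair tries to flush, the socket that failed to connect has already been torn down, so the flush runs against a closed descriptor. A minimal sketch of the same errno using only POSIX, where a pipe stands in for the qpair's socket:

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];
        if (pipe(fds) < 0) { perror("pipe"); return 1; }

        /* Tear the write end down first, as the failed qpair teardown does
         * to its socket, then try to flush pending data through it. */
        close(fds[1]);
        if (write(fds[1], "x", 1) < 0) {
            /* Prints: flush failed (9): Bad file descriptor */
            printf("flush failed (%d): %s\n", errno, strerror(errno));
        }
        close(fds[0]);
        return 0;
    }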
00:41:15.274 [2024-10-01 22:40:10.450081] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:15.274 [2024-10-01 22:40:10.450766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:15.274 [2024-10-01 22:40:10.450803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:15.274 [2024-10-01 22:40:10.450815] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:15.274 [2024-10-01 22:40:10.451060] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:15.274 [2024-10-01 22:40:10.451286] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:15.274 [2024-10-01 22:40:10.451296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:15.274 [2024-10-01 22:40:10.451304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:15.274 [2024-10-01 22:40:10.454926] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:41:15.274 6897.25 IOPS, 26.94 MiB/s
00:41:15.274 [2024-10-01 22:40:10.463998] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:15.274 [2024-10-01 22:40:10.464649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:15.274 [2024-10-01 22:40:10.464686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:15.274 [2024-10-01 22:40:10.464699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:15.274 [2024-10-01 22:40:10.464944] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:15.274 [2024-10-01 22:40:10.465170] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:15.274 [2024-10-01 22:40:10.465180] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:15.274 [2024-10-01 22:40:10.465188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:15.274 [2024-10-01 22:40:10.468794] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
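The interleaved "6897.25 IOPS, 26.94 MiB/s" line is the perf tool's periodic throughput readout, printed on the same console as the driver errors. The two figures are mutually consistent with a 4 KiB I/O size, since 6897.25 x 4096 / 2^20 = 26.94; the 4 KiB block size is inferred from that ratio, not stated in the log. A quick check:

    #include <stdio.h>

    int main(void)
    {
        double iops = 6897.25;       /* from the log readout */
        double io_bytes = 4096.0;    /* assumed 4 KiB I/O size, inferred from the ratio */
        double mib_s = iops * io_bytes / (1024.0 * 1024.0);
        printf("%.2f IOPS * %.0f B = %.2f MiB/s\n", iops, io_bytes, mib_s); /* 26.94 */
        return 0;
    }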
00:41:15.274 [2024-10-01 22:40:10.477909] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.274 [2024-10-01 22:40:10.478424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.274 [2024-10-01 22:40:10.478443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.274 [2024-10-01 22:40:10.478451] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.275 [2024-10-01 22:40:10.478707] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.275 [2024-10-01 22:40:10.478930] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.275 [2024-10-01 22:40:10.478938] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.275 [2024-10-01 22:40:10.478945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.275 [2024-10-01 22:40:10.482536] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:15.275 [2024-10-01 22:40:10.491854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.275 [2024-10-01 22:40:10.492519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.275 [2024-10-01 22:40:10.492556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.275 [2024-10-01 22:40:10.492567] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.275 [2024-10-01 22:40:10.492818] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.275 [2024-10-01 22:40:10.493045] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.275 [2024-10-01 22:40:10.493053] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.275 [2024-10-01 22:40:10.493061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.275 [2024-10-01 22:40:10.496665] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:15.275 [2024-10-01 22:40:10.505802] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.275 [2024-10-01 22:40:10.506453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.275 [2024-10-01 22:40:10.506495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.275 [2024-10-01 22:40:10.506507] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.275 [2024-10-01 22:40:10.506759] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.275 [2024-10-01 22:40:10.506986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.275 [2024-10-01 22:40:10.506995] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.275 [2024-10-01 22:40:10.507002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.275 [2024-10-01 22:40:10.510601] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:15.275 [2024-10-01 22:40:10.519733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.275 [2024-10-01 22:40:10.520376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.275 [2024-10-01 22:40:10.520412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.275 [2024-10-01 22:40:10.520424] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.275 [2024-10-01 22:40:10.520675] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.275 [2024-10-01 22:40:10.520902] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.275 [2024-10-01 22:40:10.520910] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.275 [2024-10-01 22:40:10.520918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.537 [2024-10-01 22:40:10.524519] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:15.537 [2024-10-01 22:40:10.533635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.537 [2024-10-01 22:40:10.534307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.537 [2024-10-01 22:40:10.534344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.537 [2024-10-01 22:40:10.534355] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.537 [2024-10-01 22:40:10.534597] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.537 [2024-10-01 22:40:10.534833] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.537 [2024-10-01 22:40:10.534843] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.537 [2024-10-01 22:40:10.534850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.537 [2024-10-01 22:40:10.538448] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:15.537 [2024-10-01 22:40:10.547607] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.537 [2024-10-01 22:40:10.548235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.537 [2024-10-01 22:40:10.548272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.537 [2024-10-01 22:40:10.548283] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.537 [2024-10-01 22:40:10.548526] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.537 [2024-10-01 22:40:10.548767] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.537 [2024-10-01 22:40:10.548778] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.537 [2024-10-01 22:40:10.548785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.537 [2024-10-01 22:40:10.552383] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:15.537 [2024-10-01 22:40:10.561509] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.538 [2024-10-01 22:40:10.562054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.538 [2024-10-01 22:40:10.562074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.538 [2024-10-01 22:40:10.562083] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.538 [2024-10-01 22:40:10.562305] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.538 [2024-10-01 22:40:10.562527] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.538 [2024-10-01 22:40:10.562535] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.538 [2024-10-01 22:40:10.562542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.538 [2024-10-01 22:40:10.566143] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:15.538 [2024-10-01 22:40:10.575464] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.538 [2024-10-01 22:40:10.575983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.538 [2024-10-01 22:40:10.576000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.538 [2024-10-01 22:40:10.576008] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.538 [2024-10-01 22:40:10.576229] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.538 [2024-10-01 22:40:10.576451] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.538 [2024-10-01 22:40:10.576460] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.538 [2024-10-01 22:40:10.576467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.538 [2024-10-01 22:40:10.580063] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:15.538 [2024-10-01 22:40:10.589383] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.538 [2024-10-01 22:40:10.589967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.538 [2024-10-01 22:40:10.589983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.538 [2024-10-01 22:40:10.589992] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.538 [2024-10-01 22:40:10.590214] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.538 [2024-10-01 22:40:10.590436] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.538 [2024-10-01 22:40:10.590445] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.538 [2024-10-01 22:40:10.590453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.538 [2024-10-01 22:40:10.594057] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:15.538 [2024-10-01 22:40:10.603388] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.538 [2024-10-01 22:40:10.603935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.538 [2024-10-01 22:40:10.603951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.538 [2024-10-01 22:40:10.603959] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.538 [2024-10-01 22:40:10.604180] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.538 [2024-10-01 22:40:10.604402] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.538 [2024-10-01 22:40:10.604410] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.538 [2024-10-01 22:40:10.604417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.538 [2024-10-01 22:40:10.608010] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:15.538 [2024-10-01 22:40:10.617337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.538 [2024-10-01 22:40:10.618012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.538 [2024-10-01 22:40:10.618049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.538 [2024-10-01 22:40:10.618060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.538 [2024-10-01 22:40:10.618303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.538 [2024-10-01 22:40:10.618529] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.538 [2024-10-01 22:40:10.618537] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.538 [2024-10-01 22:40:10.618545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.538 [2024-10-01 22:40:10.622148] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:15.538 [2024-10-01 22:40:10.631265] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.538 [2024-10-01 22:40:10.631911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.538 [2024-10-01 22:40:10.631949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.538 [2024-10-01 22:40:10.631960] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.538 [2024-10-01 22:40:10.632202] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.538 [2024-10-01 22:40:10.632429] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.538 [2024-10-01 22:40:10.632437] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.538 [2024-10-01 22:40:10.632445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.538 [2024-10-01 22:40:10.636050] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:15.538 [2024-10-01 22:40:10.645166] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.538 [2024-10-01 22:40:10.645916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.538 [2024-10-01 22:40:10.645953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.538 [2024-10-01 22:40:10.645969] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.538 [2024-10-01 22:40:10.646211] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.538 [2024-10-01 22:40:10.646437] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.538 [2024-10-01 22:40:10.646446] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.538 [2024-10-01 22:40:10.646453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.538 [2024-10-01 22:40:10.650061] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:15.538 [2024-10-01 22:40:10.659194] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.538 [2024-10-01 22:40:10.659928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.538 [2024-10-01 22:40:10.659965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.538 [2024-10-01 22:40:10.659976] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.538 [2024-10-01 22:40:10.660219] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.538 [2024-10-01 22:40:10.660445] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.538 [2024-10-01 22:40:10.660454] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.538 [2024-10-01 22:40:10.660461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.538 [2024-10-01 22:40:10.664064] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:15.538 [2024-10-01 22:40:10.673194] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.538 [2024-10-01 22:40:10.673746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.538 [2024-10-01 22:40:10.673765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.538 [2024-10-01 22:40:10.673773] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.538 [2024-10-01 22:40:10.673996] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.539 [2024-10-01 22:40:10.674218] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.539 [2024-10-01 22:40:10.674227] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.539 [2024-10-01 22:40:10.674234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.539 [2024-10-01 22:40:10.677831] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:15.539 [2024-10-01 22:40:10.687157] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.539 [2024-10-01 22:40:10.687835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.539 [2024-10-01 22:40:10.687872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.539 [2024-10-01 22:40:10.687883] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.539 [2024-10-01 22:40:10.688126] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.539 [2024-10-01 22:40:10.688352] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.539 [2024-10-01 22:40:10.688367] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.539 [2024-10-01 22:40:10.688374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.539 [2024-10-01 22:40:10.691983] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:15.539 [2024-10-01 22:40:10.701100] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.539 [2024-10-01 22:40:10.701728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.539 [2024-10-01 22:40:10.701766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.539 [2024-10-01 22:40:10.701778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.539 [2024-10-01 22:40:10.702022] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.539 [2024-10-01 22:40:10.702248] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.539 [2024-10-01 22:40:10.702257] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.539 [2024-10-01 22:40:10.702264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.539 [2024-10-01 22:40:10.705881] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:15.539 [2024-10-01 22:40:10.715006] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.539 [2024-10-01 22:40:10.715668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.539 [2024-10-01 22:40:10.715705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.539 [2024-10-01 22:40:10.715718] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.539 [2024-10-01 22:40:10.715962] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.539 [2024-10-01 22:40:10.716194] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.539 [2024-10-01 22:40:10.716205] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.539 [2024-10-01 22:40:10.716213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.539 [2024-10-01 22:40:10.719821] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:15.539 [2024-10-01 22:40:10.728937] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.539 [2024-10-01 22:40:10.729667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.539 [2024-10-01 22:40:10.729705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.539 [2024-10-01 22:40:10.729717] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.539 [2024-10-01 22:40:10.729964] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.539 [2024-10-01 22:40:10.730190] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.539 [2024-10-01 22:40:10.730199] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.539 [2024-10-01 22:40:10.730207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.539 [2024-10-01 22:40:10.733813] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:15.539 [2024-10-01 22:40:10.742962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.539 [2024-10-01 22:40:10.743647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.539 [2024-10-01 22:40:10.743684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.539 [2024-10-01 22:40:10.743697] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.539 [2024-10-01 22:40:10.743943] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.539 [2024-10-01 22:40:10.744170] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.539 [2024-10-01 22:40:10.744179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.539 [2024-10-01 22:40:10.744187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.539 [2024-10-01 22:40:10.747791] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:15.539 [2024-10-01 22:40:10.756920] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.539 [2024-10-01 22:40:10.757600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.539 [2024-10-01 22:40:10.757643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.539 [2024-10-01 22:40:10.757656] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.539 [2024-10-01 22:40:10.757902] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.539 [2024-10-01 22:40:10.758128] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.539 [2024-10-01 22:40:10.758138] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.539 [2024-10-01 22:40:10.758145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.539 [2024-10-01 22:40:10.761750] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:15.539 [2024-10-01 22:40:10.770865] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.539 [2024-10-01 22:40:10.771488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.539 [2024-10-01 22:40:10.771525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.539 [2024-10-01 22:40:10.771536] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.539 [2024-10-01 22:40:10.771787] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.539 [2024-10-01 22:40:10.772014] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.539 [2024-10-01 22:40:10.772023] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.539 [2024-10-01 22:40:10.772031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.539 [2024-10-01 22:40:10.775629] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:15.539 [2024-10-01 22:40:10.784744] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.539 [2024-10-01 22:40:10.785438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.539 [2024-10-01 22:40:10.785475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.539 [2024-10-01 22:40:10.785488] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.539 [2024-10-01 22:40:10.785745] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.539 [2024-10-01 22:40:10.785972] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.539 [2024-10-01 22:40:10.785981] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.540 [2024-10-01 22:40:10.785989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.802 [2024-10-01 22:40:10.789587] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:15.802 [2024-10-01 22:40:10.798706] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.802 [2024-10-01 22:40:10.799322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.802 [2024-10-01 22:40:10.799358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.802 [2024-10-01 22:40:10.799370] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.802 [2024-10-01 22:40:10.799612] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.802 [2024-10-01 22:40:10.799847] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.802 [2024-10-01 22:40:10.799856] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.802 [2024-10-01 22:40:10.799864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.802 [2024-10-01 22:40:10.803474] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:15.802 [2024-10-01 22:40:10.812590] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.802 [2024-10-01 22:40:10.813143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.802 [2024-10-01 22:40:10.813162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.802 [2024-10-01 22:40:10.813171] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.802 [2024-10-01 22:40:10.813393] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.802 [2024-10-01 22:40:10.813615] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.802 [2024-10-01 22:40:10.813631] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.802 [2024-10-01 22:40:10.813638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.802 [2024-10-01 22:40:10.817236] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:15.802 [2024-10-01 22:40:10.826569] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.802 [2024-10-01 22:40:10.827149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.802 [2024-10-01 22:40:10.827166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.802 [2024-10-01 22:40:10.827174] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.803 [2024-10-01 22:40:10.827395] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.803 [2024-10-01 22:40:10.827617] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.803 [2024-10-01 22:40:10.827632] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.803 [2024-10-01 22:40:10.827644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.803 [2024-10-01 22:40:10.831235] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:15.803 [2024-10-01 22:40:10.840557] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.803 [2024-10-01 22:40:10.841164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.803 [2024-10-01 22:40:10.841201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.803 [2024-10-01 22:40:10.841212] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.803 [2024-10-01 22:40:10.841454] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.803 [2024-10-01 22:40:10.841689] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.803 [2024-10-01 22:40:10.841700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.803 [2024-10-01 22:40:10.841707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.803 [2024-10-01 22:40:10.845303] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:15.803 [2024-10-01 22:40:10.854428] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.803 [2024-10-01 22:40:10.855024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.803 [2024-10-01 22:40:10.855042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.803 [2024-10-01 22:40:10.855050] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.803 [2024-10-01 22:40:10.855273] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.803 [2024-10-01 22:40:10.855495] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.803 [2024-10-01 22:40:10.855503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.803 [2024-10-01 22:40:10.855510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.803 [2024-10-01 22:40:10.859107] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:15.803 [2024-10-01 22:40:10.868429] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.803 [2024-10-01 22:40:10.869035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.803 [2024-10-01 22:40:10.869072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.803 [2024-10-01 22:40:10.869083] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.803 [2024-10-01 22:40:10.869326] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.803 [2024-10-01 22:40:10.869551] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.803 [2024-10-01 22:40:10.869560] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.803 [2024-10-01 22:40:10.869568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.803 [2024-10-01 22:40:10.873173] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:15.803 [2024-10-01 22:40:10.882291] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.803 [2024-10-01 22:40:10.882961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.803 [2024-10-01 22:40:10.882997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.803 [2024-10-01 22:40:10.883008] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.803 [2024-10-01 22:40:10.883251] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.803 [2024-10-01 22:40:10.883477] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.803 [2024-10-01 22:40:10.883486] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.803 [2024-10-01 22:40:10.883493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.803 [2024-10-01 22:40:10.887097] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:15.803 [2024-10-01 22:40:10.896212] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.803 [2024-10-01 22:40:10.896905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.803 [2024-10-01 22:40:10.896942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.803 [2024-10-01 22:40:10.896953] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.803 [2024-10-01 22:40:10.897196] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.803 [2024-10-01 22:40:10.897421] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.803 [2024-10-01 22:40:10.897430] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.803 [2024-10-01 22:40:10.897438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.803 [2024-10-01 22:40:10.901043] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:15.803 [2024-10-01 22:40:10.910168] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.803 [2024-10-01 22:40:10.910881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.803 [2024-10-01 22:40:10.910917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.803 [2024-10-01 22:40:10.910929] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.803 [2024-10-01 22:40:10.911171] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.803 [2024-10-01 22:40:10.911397] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.803 [2024-10-01 22:40:10.911406] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.803 [2024-10-01 22:40:10.911414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.803 [2024-10-01 22:40:10.915018] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:15.803 [2024-10-01 22:40:10.924148] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.803 [2024-10-01 22:40:10.924860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.803 [2024-10-01 22:40:10.924897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.803 [2024-10-01 22:40:10.924908] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.803 [2024-10-01 22:40:10.925150] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.803 [2024-10-01 22:40:10.925381] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.803 [2024-10-01 22:40:10.925390] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.803 [2024-10-01 22:40:10.925397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.803 [2024-10-01 22:40:10.929006] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:15.803 [2024-10-01 22:40:10.938123] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.803 [2024-10-01 22:40:10.938672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.803 [2024-10-01 22:40:10.938697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.803 [2024-10-01 22:40:10.938706] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.803 [2024-10-01 22:40:10.938933] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.803 [2024-10-01 22:40:10.939156] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.803 [2024-10-01 22:40:10.939164] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.803 [2024-10-01 22:40:10.939171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.803 [2024-10-01 22:40:10.942769] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:15.804 [2024-10-01 22:40:10.952097] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.804 [2024-10-01 22:40:10.952751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.804 [2024-10-01 22:40:10.952788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.804 [2024-10-01 22:40:10.952801] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.804 [2024-10-01 22:40:10.953047] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.804 [2024-10-01 22:40:10.953273] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.804 [2024-10-01 22:40:10.953282] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.804 [2024-10-01 22:40:10.953290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.804 [2024-10-01 22:40:10.956906] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:15.804 [2024-10-01 22:40:10.966024] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.804 [2024-10-01 22:40:10.966593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.804 [2024-10-01 22:40:10.966637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.804 [2024-10-01 22:40:10.966651] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.804 [2024-10-01 22:40:10.966897] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.804 [2024-10-01 22:40:10.967124] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.804 [2024-10-01 22:40:10.967132] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.804 [2024-10-01 22:40:10.967140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.804 [2024-10-01 22:40:10.970748] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:15.804 [2024-10-01 22:40:10.980077] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.804 [2024-10-01 22:40:10.980745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.804 [2024-10-01 22:40:10.980783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.804 [2024-10-01 22:40:10.980795] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.804 [2024-10-01 22:40:10.981041] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.804 [2024-10-01 22:40:10.981267] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.804 [2024-10-01 22:40:10.981277] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.804 [2024-10-01 22:40:10.981285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.804 [2024-10-01 22:40:10.984887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:15.804 [2024-10-01 22:40:10.994003] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.804 [2024-10-01 22:40:10.994650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.804 [2024-10-01 22:40:10.994688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.804 [2024-10-01 22:40:10.994701] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.804 [2024-10-01 22:40:10.994946] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.804 [2024-10-01 22:40:10.995173] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.804 [2024-10-01 22:40:10.995182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.804 [2024-10-01 22:40:10.995190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.804 [2024-10-01 22:40:10.998796] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:15.804 [2024-10-01 22:40:11.007928] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.804 [2024-10-01 22:40:11.008488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.804 [2024-10-01 22:40:11.008524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.804 [2024-10-01 22:40:11.008537] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.804 [2024-10-01 22:40:11.008786] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.804 [2024-10-01 22:40:11.009014] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.804 [2024-10-01 22:40:11.009023] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.804 [2024-10-01 22:40:11.009030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.804 [2024-10-01 22:40:11.012631] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:15.804 [2024-10-01 22:40:11.021980] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:15.804 [2024-10-01 22:40:11.022478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:15.804 [2024-10-01 22:40:11.022519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:15.804 [2024-10-01 22:40:11.022532] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:15.804 [2024-10-01 22:40:11.022785] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:15.804 [2024-10-01 22:40:11.023034] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:15.804 [2024-10-01 22:40:11.023044] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:15.804 [2024-10-01 22:40:11.023051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:15.804 [2024-10-01 22:40:11.026652] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:15.804 [2024-10-01 22:40:11.035907] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:15.804 [2024-10-01 22:40:11.036361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:15.804 [2024-10-01 22:40:11.036381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:15.804 [2024-10-01 22:40:11.036389] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:15.804 [2024-10-01 22:40:11.036612] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:15.804 [2024-10-01 22:40:11.036842] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:15.804 [2024-10-01 22:40:11.036853] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:15.804 [2024-10-01 22:40:11.036860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:15.804 [2024-10-01 22:40:11.040451] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[... 29 further identical reconnect cycles for tqpair=0x14ac290 (10.0.0.2:4420), each ending in "Resetting controller failed.", elided: 22:40:11.049 through 22:40:11.444 ...]
00:41:16.333 [2024-10-01 22:40:11.453997] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:16.333 [2024-10-01 22:40:11.454588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:16.333 [2024-10-01 22:40:11.454606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:16.333 [2024-10-01 22:40:11.454614] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:16.334 [2024-10-01 22:40:11.454843] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:16.334 [2024-10-01 22:40:11.455071] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:16.334 [2024-10-01 22:40:11.455080] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:16.334 [2024-10-01 22:40:11.455087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:16.334 5517.80 IOPS, 21.55 MiB/s
00:41:16.334 [2024-10-01 22:40:11.460337] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:41:16.334 [2024-10-01 22:40:11.467976] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:16.334 [2024-10-01 22:40:11.468560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:16.334 [2024-10-01 22:40:11.468597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:16.334 [2024-10-01 22:40:11.468608] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:16.334 [2024-10-01 22:40:11.468858] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:16.334 [2024-10-01 22:40:11.469085] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:16.334 [2024-10-01 22:40:11.469094] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:16.334 [2024-10-01 22:40:11.469101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:16.334 [2024-10-01 22:40:11.472700] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[... 19 further identical reconnect cycles for tqpair=0x14ac290 (10.0.0.2:4420) elided: 22:40:11.481 through 22:40:11.737 ...]
00:41:16.598 [2024-10-01 22:40:11.746771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:41:16.598 [2024-10-01 22:40:11.747427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:16.598 [2024-10-01 22:40:11.747464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420
00:41:16.598 [2024-10-01 22:40:11.747475] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set
00:41:16.598 [2024-10-01 22:40:11.747728] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor
00:41:16.598 [2024-10-01 22:40:11.747955] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:16.598 [2024-10-01 22:40:11.747964] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:16.598 [2024-10-01 22:40:11.747972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:16.598 [2024-10-01 22:40:11.751571] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:41:16.598 [2024-10-01 22:40:11.760711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:16.598 [2024-10-01 22:40:11.761399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:16.598 [2024-10-01 22:40:11.761437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:16.598 [2024-10-01 22:40:11.761448] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:16.598 [2024-10-01 22:40:11.761701] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:16.598 [2024-10-01 22:40:11.761928] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:16.598 [2024-10-01 22:40:11.761937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:16.598 [2024-10-01 22:40:11.761945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:16.598 [2024-10-01 22:40:11.765542] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:16.598 [2024-10-01 22:40:11.774655] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:16.598 [2024-10-01 22:40:11.775335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:16.598 [2024-10-01 22:40:11.775371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:16.598 [2024-10-01 22:40:11.775382] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:16.598 [2024-10-01 22:40:11.775635] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:16.598 [2024-10-01 22:40:11.775863] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:16.598 [2024-10-01 22:40:11.775871] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:16.598 [2024-10-01 22:40:11.775879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:16.598 [2024-10-01 22:40:11.779479] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:16.598 [2024-10-01 22:40:11.788601] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:16.598 [2024-10-01 22:40:11.789192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:16.598 [2024-10-01 22:40:11.789212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:16.598 [2024-10-01 22:40:11.789221] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:16.598 [2024-10-01 22:40:11.789443] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:16.598 [2024-10-01 22:40:11.789671] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:16.598 [2024-10-01 22:40:11.789688] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:16.598 [2024-10-01 22:40:11.789695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:16.598 [2024-10-01 22:40:11.793290] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:16.598 [2024-10-01 22:40:11.802617] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:16.598 [2024-10-01 22:40:11.803148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:16.598 [2024-10-01 22:40:11.803164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:16.598 [2024-10-01 22:40:11.803172] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:16.598 [2024-10-01 22:40:11.803394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:16.598 [2024-10-01 22:40:11.803616] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:16.598 [2024-10-01 22:40:11.803629] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:16.598 [2024-10-01 22:40:11.803637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:16.598 [2024-10-01 22:40:11.807237] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:16.598 [2024-10-01 22:40:11.816567] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:16.598 [2024-10-01 22:40:11.817140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:16.598 [2024-10-01 22:40:11.817158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:16.598 [2024-10-01 22:40:11.817165] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:16.598 [2024-10-01 22:40:11.817393] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:16.598 [2024-10-01 22:40:11.817615] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:16.598 [2024-10-01 22:40:11.817629] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:16.598 [2024-10-01 22:40:11.817636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:16.598 [2024-10-01 22:40:11.821225] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:16.598 [2024-10-01 22:40:11.830538] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:16.598 [2024-10-01 22:40:11.831155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:16.598 [2024-10-01 22:40:11.831192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:16.598 [2024-10-01 22:40:11.831203] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:16.598 [2024-10-01 22:40:11.831446] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:16.598 [2024-10-01 22:40:11.831681] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:16.598 [2024-10-01 22:40:11.831691] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:16.598 [2024-10-01 22:40:11.831698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:16.598 [2024-10-01 22:40:11.835293] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:16.598 [2024-10-01 22:40:11.844405] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:16.598 [2024-10-01 22:40:11.844950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:16.598 [2024-10-01 22:40:11.844968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:16.598 [2024-10-01 22:40:11.844976] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:16.598 [2024-10-01 22:40:11.845199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:16.598 [2024-10-01 22:40:11.845421] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:16.598 [2024-10-01 22:40:11.845429] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:16.598 [2024-10-01 22:40:11.845436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:16.860 [2024-10-01 22:40:11.849041] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:16.860 [2024-10-01 22:40:11.858375] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:16.860 [2024-10-01 22:40:11.858843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:16.860 [2024-10-01 22:40:11.858860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:16.860 [2024-10-01 22:40:11.858868] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:16.860 [2024-10-01 22:40:11.859090] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:16.860 [2024-10-01 22:40:11.859312] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:16.860 [2024-10-01 22:40:11.859320] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:16.860 [2024-10-01 22:40:11.859332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:16.860 [2024-10-01 22:40:11.862927] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:16.860 [2024-10-01 22:40:11.872244] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:16.860 [2024-10-01 22:40:11.872864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:16.860 [2024-10-01 22:40:11.872901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:16.860 [2024-10-01 22:40:11.872912] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:16.860 [2024-10-01 22:40:11.873154] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:16.860 [2024-10-01 22:40:11.873380] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:16.860 [2024-10-01 22:40:11.873389] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:16.860 [2024-10-01 22:40:11.873396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:16.860 [2024-10-01 22:40:11.877001] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:16.861 [2024-10-01 22:40:11.886157] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:16.861 [2024-10-01 22:40:11.886793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:16.861 [2024-10-01 22:40:11.886830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:16.861 [2024-10-01 22:40:11.886842] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:16.861 [2024-10-01 22:40:11.887084] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:16.861 [2024-10-01 22:40:11.887310] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:16.861 [2024-10-01 22:40:11.887319] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:16.861 [2024-10-01 22:40:11.887326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:16.861 [2024-10-01 22:40:11.890931] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:16.861 [2024-10-01 22:40:11.900046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:16.861 [2024-10-01 22:40:11.900731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:16.861 [2024-10-01 22:40:11.900768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:16.861 [2024-10-01 22:40:11.900781] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:16.861 [2024-10-01 22:40:11.901024] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:16.861 [2024-10-01 22:40:11.901250] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:16.861 [2024-10-01 22:40:11.901259] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:16.861 [2024-10-01 22:40:11.901267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:16.861 [2024-10-01 22:40:11.904879] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:16.861 [2024-10-01 22:40:11.913992] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:16.861 [2024-10-01 22:40:11.914654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:16.861 [2024-10-01 22:40:11.914690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:16.861 [2024-10-01 22:40:11.914702] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:16.861 [2024-10-01 22:40:11.914944] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:16.861 [2024-10-01 22:40:11.915170] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:16.861 [2024-10-01 22:40:11.915178] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:16.861 [2024-10-01 22:40:11.915186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:16.861 [2024-10-01 22:40:11.918800] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:16.861 [2024-10-01 22:40:11.927916] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:16.861 [2024-10-01 22:40:11.928582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:16.861 [2024-10-01 22:40:11.928619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:16.861 [2024-10-01 22:40:11.928639] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:16.861 [2024-10-01 22:40:11.928882] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:16.861 [2024-10-01 22:40:11.929108] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:16.861 [2024-10-01 22:40:11.929117] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:16.861 [2024-10-01 22:40:11.929124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:16.861 [2024-10-01 22:40:11.932721] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:16.861 [2024-10-01 22:40:11.941832] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:16.861 [2024-10-01 22:40:11.942509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:16.861 [2024-10-01 22:40:11.942546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:16.861 [2024-10-01 22:40:11.942557] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:16.861 [2024-10-01 22:40:11.942809] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:16.861 [2024-10-01 22:40:11.943036] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:16.861 [2024-10-01 22:40:11.943045] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:16.861 [2024-10-01 22:40:11.943052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:16.861 [2024-10-01 22:40:11.946648] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:16.861 [2024-10-01 22:40:11.955763] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:16.861 [2024-10-01 22:40:11.956414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:16.861 [2024-10-01 22:40:11.956451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:16.861 [2024-10-01 22:40:11.956462] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:16.861 [2024-10-01 22:40:11.956714] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:16.861 [2024-10-01 22:40:11.956946] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:16.861 [2024-10-01 22:40:11.956955] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:16.861 [2024-10-01 22:40:11.956963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:16.861 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 391417 Killed "${NVMF_APP[@]}" "$@" 00:41:16.861 22:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:41:16.861 22:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:41:16.861 22:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:41:16.861 [2024-10-01 22:40:11.960572] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:16.861 22:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:16.861 22:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:41:16.861 22:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=393117 00:41:16.861 22:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 393117 00:41:16.861 22:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:41:16.861 22:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 393117 ']' 00:41:16.861 22:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:16.861 [2024-10-01 22:40:11.969705] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:16.861 22:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:16.861 22:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:16.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:41:16.861 22:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:16.861 [2024-10-01 22:40:11.970353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:16.861 [2024-10-01 22:40:11.970390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:16.861 [2024-10-01 22:40:11.970402] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:16.861 22:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:41:16.861 [2024-10-01 22:40:11.970653] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:16.861 [2024-10-01 22:40:11.970880] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:16.862 [2024-10-01 22:40:11.970890] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:16.862 [2024-10-01 22:40:11.970899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:16.862 [2024-10-01 22:40:11.974502] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:16.862 [2024-10-01 22:40:11.983640] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:16.862 [2024-10-01 22:40:11.984306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:16.862 [2024-10-01 22:40:11.984343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:16.862 [2024-10-01 22:40:11.984355] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:16.862 [2024-10-01 22:40:11.984607] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:16.862 [2024-10-01 22:40:11.984841] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:16.862 [2024-10-01 22:40:11.984851] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:16.862 [2024-10-01 22:40:11.984859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:16.862 [2024-10-01 22:40:11.988461] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
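Interleaved with the reconnect noise above, the harness restarts the target side: bdevperf.sh reports the old nvmf_tgt (pid 391417) killed at line 35, and tgt_init/nvmfappstart launches a fresh one (nvmfpid=393117) inside the cvl_0_0_ns_spdk network namespace, then calls waitforlisten to block until the new process answers on /var/tmp/spdk.sock. Until that target binds 10.0.0.2:4420 again, the host keeps getting ECONNREFUSED. A rough C sketch of what waitforlisten accomplishes (hypothetical: the real helper is a shell function in SPDK's test harness, and wait_for_listen here is an invented name; max_retries mirrors the "local max_retries=100" traced above):

    #include <errno.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    /* Poll a UNIX-domain RPC socket until the freshly started target
     * accepts connections, or give up after max_retries * 100 ms. */
    static int wait_for_listen(const char *path, int max_retries)
    {
        struct sockaddr_un sa = { .sun_family = AF_UNIX };
        strncpy(sa.sun_path, path, sizeof(sa.sun_path) - 1);
        while (max_retries-- > 0) {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            int rc = connect(fd, (struct sockaddr *)&sa, sizeof(sa));
            close(fd);
            if (rc == 0)
                return 0;            /* target is up and listening */
            usleep(100 * 1000);      /* not there yet; retry in 100 ms */
        }
        return -ETIMEDOUT;
    }

    /* e.g. wait_for_listen("/var/tmp/spdk.sock", 100); */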
00:41:16.862 [2024-10-01 22:40:11.997591] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:16.862 [2024-10-01 22:40:11.998139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:16.862 [2024-10-01 22:40:11.998158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:16.862 [2024-10-01 22:40:11.998167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:16.862 [2024-10-01 22:40:11.998389] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:16.862 [2024-10-01 22:40:11.998611] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:16.862 [2024-10-01 22:40:11.998619] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:16.862 [2024-10-01 22:40:11.998636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:16.862 [2024-10-01 22:40:12.002231] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:16.862 [2024-10-01 22:40:12.011571] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:16.862 [2024-10-01 22:40:12.012108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:16.862 [2024-10-01 22:40:12.012125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:16.862 [2024-10-01 22:40:12.012133] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:16.862 [2024-10-01 22:40:12.012356] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:16.862 [2024-10-01 22:40:12.012577] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:16.862 [2024-10-01 22:40:12.012586] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:16.862 [2024-10-01 22:40:12.012593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:16.862 [2024-10-01 22:40:12.016194] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:16.862 [2024-10-01 22:40:12.020453] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:41:16.862 [2024-10-01 22:40:12.020498] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:16.862 [2024-10-01 22:40:12.025531] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:16.862 [2024-10-01 22:40:12.026113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:16.862 [2024-10-01 22:40:12.026130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:16.862 [2024-10-01 22:40:12.026138] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:16.862 [2024-10-01 22:40:12.026360] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:16.862 [2024-10-01 22:40:12.026587] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:16.862 [2024-10-01 22:40:12.026596] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:16.862 [2024-10-01 22:40:12.026603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:16.862 [2024-10-01 22:40:12.030203] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:16.862 [2024-10-01 22:40:12.039537] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:16.862 [2024-10-01 22:40:12.040076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:16.862 [2024-10-01 22:40:12.040092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:16.862 [2024-10-01 22:40:12.040100] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:16.862 [2024-10-01 22:40:12.040322] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:16.862 [2024-10-01 22:40:12.040544] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:16.862 [2024-10-01 22:40:12.040552] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:16.862 [2024-10-01 22:40:12.040559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:16.862 [2024-10-01 22:40:12.044158] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
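The freshly started target now initializes: SPDK v25.01-pre (git sha1 1b1c3081e) on DPDK 24.03.0, with the EAL core mask -c 0xE inherited from nvmfappstart -m 0xE. 0xE is binary 1110, i.e. cores 1, 2 and 3, which is consistent with the "Total cores available: 3" notice and the three reactors started on cores 1-3 further down. A throwaway C snippet decoding such a mask:

    #include <stdio.h>

    int main(void)
    {
        unsigned mask = 0xE;            /* the "-m 0xE" core mask from the log */
        printf("cores:");
        for (unsigned core = 0; mask != 0; core++, mask >>= 1)
            if (mask & 1u)
                printf(" %u", core);
        printf("\n");                   /* -> "cores: 1 2 3" */
        return 0;
    }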
00:41:16.862 [2024-10-01 22:40:12.053486] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:16.862 [2024-10-01 22:40:12.054024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:16.862 [2024-10-01 22:40:12.054041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:16.862 [2024-10-01 22:40:12.054049] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:16.862 [2024-10-01 22:40:12.054271] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:16.862 [2024-10-01 22:40:12.054493] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:16.862 [2024-10-01 22:40:12.054501] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:16.862 [2024-10-01 22:40:12.054508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:16.862 [2024-10-01 22:40:12.058112] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:16.862 [2024-10-01 22:40:12.067343] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:16.862 [2024-10-01 22:40:12.067882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:16.862 [2024-10-01 22:40:12.067901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:16.862 [2024-10-01 22:40:12.067909] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:16.862 [2024-10-01 22:40:12.068131] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:16.862 [2024-10-01 22:40:12.068353] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:16.862 [2024-10-01 22:40:12.068361] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:16.862 [2024-10-01 22:40:12.068368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:16.862 [2024-10-01 22:40:12.071972] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:16.862 [2024-10-01 22:40:12.081311] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:16.862 [2024-10-01 22:40:12.081802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:16.862 [2024-10-01 22:40:12.081820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:16.862 [2024-10-01 22:40:12.081828] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:16.862 [2024-10-01 22:40:12.082051] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:16.862 [2024-10-01 22:40:12.082273] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:16.862 [2024-10-01 22:40:12.082281] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:16.863 [2024-10-01 22:40:12.082289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:16.863 [2024-10-01 22:40:12.085889] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:16.863 [2024-10-01 22:40:12.095219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:16.863 [2024-10-01 22:40:12.095783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:16.863 [2024-10-01 22:40:12.095821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:16.863 [2024-10-01 22:40:12.095834] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:16.863 [2024-10-01 22:40:12.096080] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:16.863 [2024-10-01 22:40:12.096306] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:16.863 [2024-10-01 22:40:12.096316] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:16.863 [2024-10-01 22:40:12.096324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:16.863 [2024-10-01 22:40:12.099928] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:16.863 [2024-10-01 22:40:12.104072] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:41:16.863 [2024-10-01 22:40:12.109266] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:16.863 [2024-10-01 22:40:12.109806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:16.863 [2024-10-01 22:40:12.109826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:16.863 [2024-10-01 22:40:12.109834] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:16.863 [2024-10-01 22:40:12.110057] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:16.863 [2024-10-01 22:40:12.110280] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:16.863 [2024-10-01 22:40:12.110289] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:16.863 [2024-10-01 22:40:12.110296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:17.126 [2024-10-01 22:40:12.113898] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:17.126 [2024-10-01 22:40:12.123231] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:17.126 [2024-10-01 22:40:12.123898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:17.126 [2024-10-01 22:40:12.123941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:17.126 [2024-10-01 22:40:12.123952] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:17.126 [2024-10-01 22:40:12.124195] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:17.126 [2024-10-01 22:40:12.124421] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:17.126 [2024-10-01 22:40:12.124430] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:17.126 [2024-10-01 22:40:12.124438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:17.126 [2024-10-01 22:40:12.128046] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:17.126 [2024-10-01 22:40:12.137163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:17.126 [2024-10-01 22:40:12.137730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:17.126 [2024-10-01 22:40:12.137768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:17.126 [2024-10-01 22:40:12.137779] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:17.126 [2024-10-01 22:40:12.138022] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:17.126 [2024-10-01 22:40:12.138249] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:17.126 [2024-10-01 22:40:12.138258] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:17.126 [2024-10-01 22:40:12.138266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:17.126 [2024-10-01 22:40:12.141871] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:17.126 [2024-10-01 22:40:12.151197] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:17.126 [2024-10-01 22:40:12.151765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:17.126 [2024-10-01 22:40:12.151785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:17.126 [2024-10-01 22:40:12.151794] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:17.126 [2024-10-01 22:40:12.152017] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:17.126 [2024-10-01 22:40:12.152240] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:17.126 [2024-10-01 22:40:12.152248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:17.126 [2024-10-01 22:40:12.152255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:17.126 [2024-10-01 22:40:12.155861] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:17.126 [2024-10-01 22:40:12.157722] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:17.126 [2024-10-01 22:40:12.157745] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:17.126 [2024-10-01 22:40:12.157751] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:17.126 [2024-10-01 22:40:12.157757] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:17.126 [2024-10-01 22:40:12.157761] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
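(The app_setup_trace notices just above follow from the -e 0xFFFF tracepoint mask on the nvmf_tgt command line: every trace group is enabled, the shared-memory trace file is /dev/shm/nvmf_trace.0 -- the trailing 0 coming from -i 0 -- and, as the notices say, it can be read live with 'spdk_trace -s nvmf -i 0' or copied out for offline analysis after the run.)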
00:41:17.126 [2024-10-01 22:40:12.157896] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:41:17.126 [2024-10-01 22:40:12.158148] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:41:17.126 [2024-10-01 22:40:12.158148] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:41:17.126 [2024-10-01 22:40:12.165209] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:17.126 [2024-10-01 22:40:12.165639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:17.126 [2024-10-01 22:40:12.165657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:17.126 [2024-10-01 22:40:12.165665] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:17.126 [2024-10-01 22:40:12.165888] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:17.126 [2024-10-01 22:40:12.166110] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:17.126 [2024-10-01 22:40:12.166118] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:17.126 [2024-10-01 22:40:12.166126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:17.126 [2024-10-01 22:40:12.169723] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:17.126 [2024-10-01 22:40:12.179261] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:17.126 [2024-10-01 22:40:12.179820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:17.126 [2024-10-01 22:40:12.179859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:17.126 [2024-10-01 22:40:12.179870] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:17.126 [2024-10-01 22:40:12.180118] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:17.126 [2024-10-01 22:40:12.180345] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:17.126 [2024-10-01 22:40:12.180353] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:17.126 [2024-10-01 22:40:12.180362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:17.126 [2024-10-01 22:40:12.183970] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:17.127 [2024-10-01 22:40:12.193300] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:17.127 [2024-10-01 22:40:12.193832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:17.127 [2024-10-01 22:40:12.193870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:17.127 [2024-10-01 22:40:12.193883] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:17.127 [2024-10-01 22:40:12.194131] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:17.127 [2024-10-01 22:40:12.194358] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:17.127 [2024-10-01 22:40:12.194367] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:17.127 [2024-10-01 22:40:12.194375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:17.127 [2024-10-01 22:40:12.197980] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:17.127 [2024-10-01 22:40:12.207321] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:17.127 [2024-10-01 22:40:12.207902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:17.127 [2024-10-01 22:40:12.207945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:17.127 [2024-10-01 22:40:12.207957] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:17.127 [2024-10-01 22:40:12.208199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:17.127 [2024-10-01 22:40:12.208425] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:17.127 [2024-10-01 22:40:12.208434] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:17.127 [2024-10-01 22:40:12.208442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:17.127 [2024-10-01 22:40:12.212046] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:17.127 [2024-10-01 22:40:12.221389] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:17.127 [2024-10-01 22:40:12.221981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:17.127 [2024-10-01 22:40:12.222019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:17.127 [2024-10-01 22:40:12.222030] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:17.127 [2024-10-01 22:40:12.222273] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:17.127 [2024-10-01 22:40:12.222499] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:17.127 [2024-10-01 22:40:12.222508] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:17.127 [2024-10-01 22:40:12.222516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:17.127 [2024-10-01 22:40:12.226120] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:17.127 [2024-10-01 22:40:12.235447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:17.127 [2024-10-01 22:40:12.236048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:17.127 [2024-10-01 22:40:12.236086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:17.127 [2024-10-01 22:40:12.236098] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:17.127 [2024-10-01 22:40:12.236345] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:17.127 [2024-10-01 22:40:12.236572] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:17.127 [2024-10-01 22:40:12.236581] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:17.127 [2024-10-01 22:40:12.236589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:17.127 [2024-10-01 22:40:12.240194] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:17.392 4598.17 IOPS, 17.96 MiB/s
00:41:17.392 [2024-10-01 22:40:12.464960] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:41:17.659 22:40:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:41:17.659 22:40:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0
00:41:17.659 22:40:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:41:17.659 22:40:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable
00:41:17.659 22:40:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:41:17.659 22:40:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:41:17.659 22:40:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:41:17.659 22:40:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:41:17.659 22:40:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:41:17.659 [2024-10-01 22:40:12.867202] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:41:17.659 22:40:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:17.659 22:40:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:17.659 22:40:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:17.659 22:40:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:41:17.659 [2024-10-01 22:40:12.877708] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:17.659 [2024-10-01 22:40:12.878247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:17.659 [2024-10-01 22:40:12.878263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:17.659 [2024-10-01 22:40:12.878271] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:17.659 [2024-10-01 22:40:12.878493] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:17.659 [2024-10-01 22:40:12.878720] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:17.659 [2024-10-01 22:40:12.878729] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:17.659 [2024-10-01 22:40:12.878736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:17.659 [2024-10-01 22:40:12.882325] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:17.659 Malloc0 00:41:17.659 22:40:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:17.659 22:40:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:17.659 22:40:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:17.659 22:40:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:41:17.659 [2024-10-01 22:40:12.891659] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:17.659 [2024-10-01 22:40:12.892226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:17.659 [2024-10-01 22:40:12.892241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:17.659 [2024-10-01 22:40:12.892249] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:17.659 [2024-10-01 22:40:12.892471] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:17.659 [2024-10-01 22:40:12.892698] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:17.659 [2024-10-01 22:40:12.892707] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:17.659 [2024-10-01 22:40:12.892714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:41:17.659 [2024-10-01 22:40:12.896306] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:17.660 22:40:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:17.660 22:40:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:17.660 22:40:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:17.660 22:40:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:41:17.660 [2024-10-01 22:40:12.905644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:17.660 [2024-10-01 22:40:12.906081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:17.660 [2024-10-01 22:40:12.906096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:17.660 [2024-10-01 22:40:12.906104] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:17.660 [2024-10-01 22:40:12.906326] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:17.660 [2024-10-01 22:40:12.906548] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:17.660 [2024-10-01 22:40:12.906557] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:17.660 [2024-10-01 22:40:12.906564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:17.921 [2024-10-01 22:40:12.910159] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:17.921 22:40:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:17.921 22:40:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:17.921 22:40:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:17.921 22:40:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:41:17.921 [2024-10-01 22:40:12.919486] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:17.921 [2024-10-01 22:40:12.920082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:17.921 [2024-10-01 22:40:12.920098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ac290 with addr=10.0.0.2, port=4420 00:41:17.921 [2024-10-01 22:40:12.920106] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac290 is same with the state(6) to be set 00:41:17.921 [2024-10-01 22:40:12.920328] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ac290 (9): Bad file descriptor 00:41:17.921 [2024-10-01 22:40:12.920549] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:41:17.921 [2024-10-01 22:40:12.920557] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:41:17.921 [2024-10-01 22:40:12.920568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:17.921 [2024-10-01 22:40:12.921662] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:17.921 [2024-10-01 22:40:12.924163] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:17.921 22:40:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:17.921 22:40:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 391906 00:41:17.921 [2024-10-01 22:40:12.933483] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:17.921 [2024-10-01 22:40:13.102173] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
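[Editor's note] The four rpc_cmd calls interleaved through the reset noise above bring the target up end to end: a 64 MiB malloc bdev, the cnode1 subsystem, the namespace attach, and the TCP listener that produces the "Target Listening on 10.0.0.2 port 4420" notice. A sketch of the same sequence as plain rpc.py calls; every name and value is copied from the trace, only the direct rpc.py invocation style is an assumption:

    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MiB bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420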
00:41:26.463 4562.29 IOPS, 17.82 MiB/s 5416.00 IOPS, 21.16 MiB/s 6031.56 IOPS, 23.56 MiB/s 6542.90 IOPS, 25.56 MiB/s 6957.36 IOPS, 27.18 MiB/s 7312.83 IOPS, 28.57 MiB/s 7608.08 IOPS, 29.72 MiB/s 7850.29 IOPS, 30.67 MiB/s 00:41:26.463 Latency(us) 00:41:26.463 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:26.463 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:41:26.463 Verification LBA range: start 0x0 length 0x4000 00:41:26.463 Nvme1n1 : 15.01 8058.69 31.48 9964.56 0.00 7075.97 826.03 15073.28 00:41:26.463 =================================================================================================================== 00:41:26.463 Total : 8058.69 31.48 9964.56 0.00 7075.97 826.03 15073.28 00:41:26.463 22:40:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:41:26.463 22:40:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:26.463 22:40:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:26.463 22:40:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:41:26.463 22:40:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:26.463 22:40:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:41:26.463 22:40:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:41:26.463 22:40:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # nvmfcleanup 00:41:26.463 22:40:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:41:26.463 22:40:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:26.463 22:40:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:41:26.463 22:40:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:26.463 22:40:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:26.463 rmmod nvme_tcp 00:41:26.723 rmmod nvme_fabrics 00:41:26.724 rmmod nvme_keyring 00:41:26.724 22:40:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:26.724 22:40:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:41:26.724 22:40:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:41:26.724 22:40:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@515 -- # '[' -n 393117 ']' 00:41:26.724 22:40:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # killprocess 393117 00:41:26.724 22:40:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 393117 ']' 00:41:26.724 22:40:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 393117 00:41:26.724 22:40:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:41:26.724 22:40:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:26.724 22:40:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 393117 00:41:26.724 22:40:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:41:26.724 22:40:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:41:26.724 22:40:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 393117' 00:41:26.724 killing process with pid 393117 00:41:26.724 22:40:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 393117 00:41:26.724 22:40:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 393117 00:41:26.985 22:40:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:41:26.985 22:40:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:41:26.985 22:40:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:41:26.985 22:40:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:41:26.985 22:40:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-save 00:41:26.985 22:40:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:41:26.985 22:40:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-restore 00:41:26.985 22:40:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:26.985 22:40:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:26.985 22:40:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:26.985 22:40:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:26.985 22:40:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:28.893 22:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:28.893 00:41:28.893 real 0m28.101s 00:41:28.893 user 1m3.745s 00:41:28.893 sys 0m7.496s 00:41:28.893 22:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:28.893 22:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:41:28.893 ************************************ 00:41:28.893 END TEST nvmf_bdevperf 00:41:28.893 ************************************ 00:41:28.893 22:40:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:41:28.893 22:40:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:41:28.893 22:40:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:28.893 22:40:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:41:29.154 ************************************ 00:41:29.154 START TEST nvmf_target_disconnect 00:41:29.154 ************************************ 00:41:29.154 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:41:29.154 * Looking for test storage... 
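[Editor's note] Before the next test's output begins: a quick consistency check on the bdevperf summary table printed above. The MiB/s column is just the IOPS column scaled by the 4096-byte I/O size the reconnect job uses (awk is used here only because bash arithmetic is integer-only):

    # 8058.69 IOPS x 4096 B per I/O, converted to MiB/s.
    # Prints 31.48, matching the Nvme1n1 row in the table above.
    awk 'BEGIN { printf "%.2f\n", 8058.69 * 4096 / (1024 * 1024) }'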
00:41:29.154 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:41:29.154 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:41:29.154 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:41:29.154 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:41:29.154 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:41:29.154 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:29.154 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:29.154 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:29.154 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:41:29.154 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:41:29.154 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:41:29.154 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:41:29.154 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:41:29.154 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:41:29.154 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:41:29.154 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:29.154 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:41:29.154 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:41:29.154 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:29.154 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:29.154 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:41:29.154 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:41:29.154 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:29.154 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:41:29.154 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:41:29.154 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:41:29.154 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:41:29.154 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:29.154 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:41:29.154 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:41:29.154 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:29.154 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:29.154 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:41:29.154 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:29.154 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:41:29.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:29.154 --rc genhtml_branch_coverage=1 00:41:29.155 --rc genhtml_function_coverage=1 00:41:29.155 --rc genhtml_legend=1 00:41:29.155 --rc geninfo_all_blocks=1 00:41:29.155 --rc geninfo_unexecuted_blocks=1 00:41:29.155 00:41:29.155 ' 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:41:29.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:29.155 --rc genhtml_branch_coverage=1 00:41:29.155 --rc genhtml_function_coverage=1 00:41:29.155 --rc genhtml_legend=1 00:41:29.155 --rc geninfo_all_blocks=1 00:41:29.155 --rc geninfo_unexecuted_blocks=1 00:41:29.155 00:41:29.155 ' 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:41:29.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:29.155 --rc genhtml_branch_coverage=1 00:41:29.155 --rc genhtml_function_coverage=1 00:41:29.155 --rc genhtml_legend=1 00:41:29.155 --rc geninfo_all_blocks=1 00:41:29.155 --rc geninfo_unexecuted_blocks=1 00:41:29.155 00:41:29.155 ' 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:41:29.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:29.155 --rc genhtml_branch_coverage=1 00:41:29.155 --rc genhtml_function_coverage=1 00:41:29.155 --rc genhtml_legend=1 00:41:29.155 --rc geninfo_all_blocks=1 00:41:29.155 --rc geninfo_unexecuted_blocks=1 00:41:29.155 00:41:29.155 ' 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:29.155 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:29.155 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:29.415 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:41:29.415 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:41:29.415 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:41:29.415 22:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:41:37.556 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:41:37.556 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:41:37.556 Found net devices under 0000:4b:00.0: cvl_0_0 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:41:37.556 Found net devices under 0000:4b:00.1: cvl_0_1 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:41:37.556 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
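[Editor's note] The device-discovery block above first matches NICs by PCI vendor:device ID (both E810 ports report 0x8086:0x159b) and then resolves each PCI function to its kernel net device through sysfs, which is what yields the "Found net devices under 0000:4b:00.x: cvl_0_x" lines. A hedged sketch of that lookup step; the PCI address is taken from the trace and the sysfs layout is standard:

    # Mirror of nvmf/common.sh's pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*):
    # list the net interfaces bound to one PCI function.
    for dev in /sys/bus/pci/devices/0000:4b:00.0/net/*; do
        echo "0000:4b:00.0 -> ${dev##*/}"
    done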
00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:37.557 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:37.557 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.687 ms 00:41:37.557 00:41:37.557 --- 10.0.0.2 ping statistics --- 00:41:37.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:37.557 rtt min/avg/max/mdev = 0.687/0.687/0.687/0.000 ms 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:37.557 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:37.557 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:41:37.557 00:41:37.557 --- 10.0.0.1 ping statistics --- 00:41:37.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:37.557 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # return 0 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:41:37.557 ************************************ 00:41:37.557 START TEST nvmf_target_disconnect_tc1 00:41:37.557 ************************************ 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:41:37.557 22:40:31 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:41:37.557 [2024-10-01 22:40:31.986309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:37.557 [2024-10-01 22:40:31.986369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d2ba0 with addr=10.0.0.2, port=4420 00:41:37.557 [2024-10-01 22:40:31.986398] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:41:37.557 [2024-10-01 22:40:31.986413] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:41:37.557 [2024-10-01 22:40:31.986421] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:41:37.557 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:41:37.557 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:41:37.557 Initializing NVMe Controllers 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:41:37.557 00:41:37.557 real 0m0.116s 00:41:37.557 user 0m0.046s 00:41:37.557 sys 0m0.070s 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:37.557 22:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:41:37.557 ************************************ 00:41:37.557 END TEST nvmf_target_disconnect_tc1 00:41:37.557 ************************************ 00:41:37.557 22:40:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:41:37.557 22:40:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:37.557 22:40:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:41:37.557 22:40:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:41:37.557 ************************************ 00:41:37.557 START TEST nvmf_target_disconnect_tc2 00:41:37.557 ************************************ 00:41:37.557 22:40:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:41:37.557 22:40:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:41:37.557 22:40:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:41:37.557 22:40:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:41:37.558 22:40:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:37.558 22:40:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:41:37.558 22:40:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=399162 00:41:37.558 22:40:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 399162 00:41:37.558 22:40:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 399162 ']' 00:41:37.558 22:40:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:41:37.558 22:40:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:37.558 22:40:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:37.558 22:40:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:37.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:37.558 22:40:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:37.558 22:40:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:41:37.558 [2024-10-01 22:40:32.154826] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:41:37.558 [2024-10-01 22:40:32.154902] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:37.558 [2024-10-01 22:40:32.245574] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:37.558 [2024-10-01 22:40:32.342015] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:37.558 [2024-10-01 22:40:32.342067] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
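[Editor's note] The nvmf_tgt instance above is launched with -m 0xF0, and the "Reactor started" lines that follow are exactly the set bits of that mask. A one-liner to confirm the mapping:

    # 0xF0 = 11110000 in binary: bits 4-7 are set, so SPDK pins one reactor
    # to each of cores 4, 5, 6 and 7, matching the four reactor lines below.
    for core in {0..7}; do
        (( (0xF0 >> core) & 1 )) && echo "core $core selected"
    done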
00:41:37.558 [2024-10-01 22:40:32.342076] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:37.558 [2024-10-01 22:40:32.342084] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:37.558 [2024-10-01 22:40:32.342090] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:37.558 [2024-10-01 22:40:32.342254] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:41:37.558 [2024-10-01 22:40:32.342402] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:41:37.558 [2024-10-01 22:40:32.342594] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:41:37.558 [2024-10-01 22:40:32.342593] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:41:37.819 22:40:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:37.819 22:40:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:41:37.819 22:40:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:41:37.819 22:40:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:37.819 22:40:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:41:37.819 22:40:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:37.819 22:40:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:37.819 22:40:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:37.819 22:40:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:41:37.819 Malloc0 00:41:37.819 22:40:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:37.819 22:40:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:41:37.819 22:40:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:37.819 22:40:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:41:37.819 [2024-10-01 22:40:33.050366] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:37.819 22:40:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:37.819 22:40:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:37.819 22:40:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:37.819 22:40:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:41:37.819 22:40:33 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:37.819 22:40:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:38.080 22:40:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:38.080 22:40:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:41:38.080 22:40:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:38.080 22:40:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:38.080 22:40:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:38.080 22:40:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:41:38.080 [2024-10-01 22:40:33.090753] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:38.080 22:40:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:38.080 22:40:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:38.080 22:40:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:38.080 22:40:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:41:38.080 22:40:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:38.080 22:40:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=399496 00:41:38.080 22:40:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:41:38.080 22:40:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:41:40.002 22:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 399162 00:41:40.002 22:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:41:40.002 Read completed with error (sct=0, sc=8) 00:41:40.002 starting I/O failed 00:41:40.002 Read completed with error (sct=0, sc=8) 00:41:40.002 starting I/O failed 00:41:40.002 Read completed with error (sct=0, sc=8) 00:41:40.002 starting I/O failed 00:41:40.002 Read completed with error (sct=0, sc=8) 00:41:40.002 starting I/O failed 00:41:40.002 Read completed with error (sct=0, sc=8) 00:41:40.002 starting I/O failed 00:41:40.002 Read completed with error (sct=0, sc=8) 00:41:40.002 starting I/O failed 00:41:40.002 Read completed with error 
00:41:40.002 Read completed with error (sct=0, sc=8)
00:41:40.002 starting I/O failed
[the pair above repeats for all 32 outstanding I/Os on the queue — 26 reads and 6 writes — each completing with (sct=0, sc=8) and logged as "starting I/O failed"]
00:41:40.002 [2024-10-01 22:40:35.132599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:40.002 [2024-10-01 22:40:35.133060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:40.002 [2024-10-01 22:40:35.133090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420
00:41:40.002 qpair failed and we were unable to recover it.
00:41:40.002 [2024-10-01 22:40:35.133372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:40.002 [2024-10-01 22:40:35.133388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420
00:41:40.002 qpair failed and we were unable to recover it.
00:41:40.003 [2024-10-01 22:40:35.133603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:40.003 [2024-10-01 22:40:35.133622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420
00:41:40.003 qpair failed and we were unable to recover it.
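[Annotation, decoding the failure pattern above: (sct=0, sc=8) is NVMe status code type 0 (generic command status) with status code 0x08, "Command Aborted due to SQ Deletion" (SPDK_NVME_SC_ABORTED_SQ_DELETION) — the status the host driver stamps on the 32 in-flight commands once the CQ transport error (-6, i.e. ENXIO, "No such device or address") tells it the qpair is gone. The reconnect attempts then fail with errno = 111 because nothing is listening on 10.0.0.2:4420 anymore; on Linux that value is ECONNREFUSED:]

  # errno 111 on Linux:
  grep -w ECONNREFUSED /usr/include/asm-generic/errno.h
  # -> #define ECONNREFUSED 111 /* Connection refused */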
00:41:40.003 [2024-10-01 22:40:35.133871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:40.003 [2024-10-01 22:40:35.133915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420
00:41:40.003 qpair failed and we were unable to recover it.
[the same three-line failure — connect() refused with errno = 111, sock connection error on tqpair=0x130a180 against 10.0.0.2 port 4420, then "qpair failed and we were unable to recover it." — repeats for every reconnect attempt between 2024-10-01 22:40:35.133 and 22:40:35.187]
00:41:40.007 [2024-10-01 22:40:35.187796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:40.007 [2024-10-01 22:40:35.187806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420
00:41:40.007 qpair failed and we were unable to recover it.
00:41:40.007 [2024-10-01 22:40:35.188126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.007 [2024-10-01 22:40:35.188136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.007 qpair failed and we were unable to recover it. 00:41:40.007 [2024-10-01 22:40:35.188429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.007 [2024-10-01 22:40:35.188439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.007 qpair failed and we were unable to recover it. 00:41:40.007 [2024-10-01 22:40:35.188730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.007 [2024-10-01 22:40:35.188741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.007 qpair failed and we were unable to recover it. 00:41:40.007 [2024-10-01 22:40:35.189030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.007 [2024-10-01 22:40:35.189040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.007 qpair failed and we were unable to recover it. 00:41:40.007 [2024-10-01 22:40:35.189330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.007 [2024-10-01 22:40:35.189340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.007 qpair failed and we were unable to recover it. 00:41:40.007 [2024-10-01 22:40:35.189644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.007 [2024-10-01 22:40:35.189654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.007 qpair failed and we were unable to recover it. 00:41:40.007 [2024-10-01 22:40:35.190075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.007 [2024-10-01 22:40:35.190085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.007 qpair failed and we were unable to recover it. 00:41:40.007 [2024-10-01 22:40:35.190371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.008 [2024-10-01 22:40:35.190381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.008 qpair failed and we were unable to recover it. 00:41:40.008 [2024-10-01 22:40:35.190683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.008 [2024-10-01 22:40:35.190694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.008 qpair failed and we were unable to recover it. 00:41:40.008 [2024-10-01 22:40:35.190987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.008 [2024-10-01 22:40:35.191000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.008 qpair failed and we were unable to recover it. 
00:41:40.008 [2024-10-01 22:40:35.191298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.008 [2024-10-01 22:40:35.191308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.008 qpair failed and we were unable to recover it. 00:41:40.008 [2024-10-01 22:40:35.191646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.008 [2024-10-01 22:40:35.191656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.008 qpair failed and we were unable to recover it. 00:41:40.008 [2024-10-01 22:40:35.191961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.008 [2024-10-01 22:40:35.191970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.008 qpair failed and we were unable to recover it. 00:41:40.008 [2024-10-01 22:40:35.192260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.008 [2024-10-01 22:40:35.192270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.008 qpair failed and we were unable to recover it. 00:41:40.008 [2024-10-01 22:40:35.192570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.008 [2024-10-01 22:40:35.192581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.008 qpair failed and we were unable to recover it. 00:41:40.008 [2024-10-01 22:40:35.192863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.008 [2024-10-01 22:40:35.192874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.008 qpair failed and we were unable to recover it. 00:41:40.008 [2024-10-01 22:40:35.193171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.008 [2024-10-01 22:40:35.193182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.008 qpair failed and we were unable to recover it. 00:41:40.008 [2024-10-01 22:40:35.193473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.008 [2024-10-01 22:40:35.193483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.008 qpair failed and we were unable to recover it. 00:41:40.008 [2024-10-01 22:40:35.193748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.008 [2024-10-01 22:40:35.193758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.008 qpair failed and we were unable to recover it. 00:41:40.008 [2024-10-01 22:40:35.194046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.008 [2024-10-01 22:40:35.194056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.008 qpair failed and we were unable to recover it. 
00:41:40.008 [2024-10-01 22:40:35.194365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.008 [2024-10-01 22:40:35.194374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.008 qpair failed and we were unable to recover it. 00:41:40.008 [2024-10-01 22:40:35.194688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.008 [2024-10-01 22:40:35.194699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.008 qpair failed and we were unable to recover it. 00:41:40.008 [2024-10-01 22:40:35.195025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.008 [2024-10-01 22:40:35.195036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.008 qpair failed and we were unable to recover it. 00:41:40.008 [2024-10-01 22:40:35.195295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.008 [2024-10-01 22:40:35.195306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.008 qpair failed and we were unable to recover it. 00:41:40.008 [2024-10-01 22:40:35.195587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.008 [2024-10-01 22:40:35.195598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.008 qpair failed and we were unable to recover it. 00:41:40.008 [2024-10-01 22:40:35.195799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.008 [2024-10-01 22:40:35.195809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.008 qpair failed and we were unable to recover it. 00:41:40.008 [2024-10-01 22:40:35.196105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.008 [2024-10-01 22:40:35.196115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.008 qpair failed and we were unable to recover it. 00:41:40.008 [2024-10-01 22:40:35.196426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.008 [2024-10-01 22:40:35.196436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.008 qpair failed and we were unable to recover it. 00:41:40.008 [2024-10-01 22:40:35.196650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.008 [2024-10-01 22:40:35.196660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.008 qpair failed and we were unable to recover it. 00:41:40.008 [2024-10-01 22:40:35.196995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.008 [2024-10-01 22:40:35.197005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.008 qpair failed and we were unable to recover it. 
00:41:40.008 [2024-10-01 22:40:35.197284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.008 [2024-10-01 22:40:35.197293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.008 qpair failed and we were unable to recover it. 00:41:40.008 [2024-10-01 22:40:35.197615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.008 [2024-10-01 22:40:35.197631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.008 qpair failed and we were unable to recover it. 00:41:40.008 [2024-10-01 22:40:35.197925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.008 [2024-10-01 22:40:35.197934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.008 qpair failed and we were unable to recover it. 00:41:40.008 [2024-10-01 22:40:35.198243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.008 [2024-10-01 22:40:35.198252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.008 qpair failed and we were unable to recover it. 00:41:40.008 [2024-10-01 22:40:35.198571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.008 [2024-10-01 22:40:35.198581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.008 qpair failed and we were unable to recover it. 00:41:40.008 [2024-10-01 22:40:35.198907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.008 [2024-10-01 22:40:35.198918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.008 qpair failed and we were unable to recover it. 00:41:40.008 [2024-10-01 22:40:35.199226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.008 [2024-10-01 22:40:35.199238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.008 qpair failed and we were unable to recover it. 00:41:40.008 [2024-10-01 22:40:35.199528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.008 [2024-10-01 22:40:35.199538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.008 qpair failed and we were unable to recover it. 00:41:40.008 [2024-10-01 22:40:35.199724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.008 [2024-10-01 22:40:35.199734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.008 qpair failed and we were unable to recover it. 00:41:40.008 [2024-10-01 22:40:35.200111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.008 [2024-10-01 22:40:35.200121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.008 qpair failed and we were unable to recover it. 
00:41:40.008 [2024-10-01 22:40:35.200409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.008 [2024-10-01 22:40:35.200419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.008 qpair failed and we were unable to recover it. 00:41:40.008 [2024-10-01 22:40:35.200749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.008 [2024-10-01 22:40:35.200759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.008 qpair failed and we were unable to recover it. 00:41:40.008 [2024-10-01 22:40:35.201055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.008 [2024-10-01 22:40:35.201066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.008 qpair failed and we were unable to recover it. 00:41:40.008 [2024-10-01 22:40:35.201344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.008 [2024-10-01 22:40:35.201354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.008 qpair failed and we were unable to recover it. 00:41:40.008 [2024-10-01 22:40:35.201647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.008 [2024-10-01 22:40:35.201657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.008 qpair failed and we were unable to recover it. 00:41:40.008 [2024-10-01 22:40:35.201873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.008 [2024-10-01 22:40:35.201883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.008 qpair failed and we were unable to recover it. 00:41:40.009 [2024-10-01 22:40:35.202185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.009 [2024-10-01 22:40:35.202195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.009 qpair failed and we were unable to recover it. 00:41:40.009 [2024-10-01 22:40:35.202501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.009 [2024-10-01 22:40:35.202511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.009 qpair failed and we were unable to recover it. 00:41:40.009 [2024-10-01 22:40:35.202830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.009 [2024-10-01 22:40:35.202841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.009 qpair failed and we were unable to recover it. 00:41:40.009 [2024-10-01 22:40:35.203119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.009 [2024-10-01 22:40:35.203129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.009 qpair failed and we were unable to recover it. 
00:41:40.009 [2024-10-01 22:40:35.203446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.009 [2024-10-01 22:40:35.203455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.009 qpair failed and we were unable to recover it. 00:41:40.009 [2024-10-01 22:40:35.203760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.009 [2024-10-01 22:40:35.203770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.009 qpair failed and we were unable to recover it. 00:41:40.009 [2024-10-01 22:40:35.204061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.009 [2024-10-01 22:40:35.204071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.009 qpair failed and we were unable to recover it. 00:41:40.009 [2024-10-01 22:40:35.204364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.009 [2024-10-01 22:40:35.204374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.009 qpair failed and we were unable to recover it. 00:41:40.009 [2024-10-01 22:40:35.204696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.009 [2024-10-01 22:40:35.204706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.009 qpair failed and we were unable to recover it. 00:41:40.009 [2024-10-01 22:40:35.204999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.009 [2024-10-01 22:40:35.205010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.009 qpair failed and we were unable to recover it. 00:41:40.009 [2024-10-01 22:40:35.205284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.009 [2024-10-01 22:40:35.205295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.009 qpair failed and we were unable to recover it. 00:41:40.009 [2024-10-01 22:40:35.205462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.009 [2024-10-01 22:40:35.205474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.009 qpair failed and we were unable to recover it. 00:41:40.009 [2024-10-01 22:40:35.205654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.009 [2024-10-01 22:40:35.205666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.009 qpair failed and we were unable to recover it. 00:41:40.009 [2024-10-01 22:40:35.205834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.009 [2024-10-01 22:40:35.205845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.009 qpair failed and we were unable to recover it. 
00:41:40.009 [2024-10-01 22:40:35.206089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.009 [2024-10-01 22:40:35.206099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.009 qpair failed and we were unable to recover it. 00:41:40.009 [2024-10-01 22:40:35.206384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.009 [2024-10-01 22:40:35.206402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.009 qpair failed and we were unable to recover it. 00:41:40.009 [2024-10-01 22:40:35.206702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.009 [2024-10-01 22:40:35.206713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.009 qpair failed and we were unable to recover it. 00:41:40.009 [2024-10-01 22:40:35.207024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.009 [2024-10-01 22:40:35.207034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.009 qpair failed and we were unable to recover it. 00:41:40.009 [2024-10-01 22:40:35.207363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.009 [2024-10-01 22:40:35.207373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.009 qpair failed and we were unable to recover it. 00:41:40.009 [2024-10-01 22:40:35.207579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.009 [2024-10-01 22:40:35.207588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.009 qpair failed and we were unable to recover it. 00:41:40.009 [2024-10-01 22:40:35.207891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.009 [2024-10-01 22:40:35.207900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.009 qpair failed and we were unable to recover it. 00:41:40.009 [2024-10-01 22:40:35.208199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.009 [2024-10-01 22:40:35.208210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.009 qpair failed and we were unable to recover it. 00:41:40.009 [2024-10-01 22:40:35.208521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.009 [2024-10-01 22:40:35.208531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.009 qpair failed and we were unable to recover it. 00:41:40.009 [2024-10-01 22:40:35.208864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.009 [2024-10-01 22:40:35.208875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.009 qpair failed and we were unable to recover it. 
00:41:40.009 [2024-10-01 22:40:35.209155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.009 [2024-10-01 22:40:35.209165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.009 qpair failed and we were unable to recover it. 00:41:40.009 [2024-10-01 22:40:35.209444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.009 [2024-10-01 22:40:35.209454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.009 qpair failed and we were unable to recover it. 00:41:40.009 [2024-10-01 22:40:35.209772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.009 [2024-10-01 22:40:35.209782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.009 qpair failed and we were unable to recover it. 00:41:40.009 [2024-10-01 22:40:35.210077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.009 [2024-10-01 22:40:35.210087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.009 qpair failed and we were unable to recover it. 00:41:40.009 [2024-10-01 22:40:35.210428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.009 [2024-10-01 22:40:35.210438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.009 qpair failed and we were unable to recover it. 00:41:40.009 [2024-10-01 22:40:35.210726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.009 [2024-10-01 22:40:35.210736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.009 qpair failed and we were unable to recover it. 00:41:40.009 [2024-10-01 22:40:35.211077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.009 [2024-10-01 22:40:35.211087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.009 qpair failed and we were unable to recover it. 00:41:40.009 [2024-10-01 22:40:35.211391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.009 [2024-10-01 22:40:35.211401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.009 qpair failed and we were unable to recover it. 00:41:40.009 [2024-10-01 22:40:35.211703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.009 [2024-10-01 22:40:35.211714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.009 qpair failed and we were unable to recover it. 00:41:40.009 [2024-10-01 22:40:35.212001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.009 [2024-10-01 22:40:35.212011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.009 qpair failed and we were unable to recover it. 
00:41:40.009 [2024-10-01 22:40:35.212331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.009 [2024-10-01 22:40:35.212340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.009 qpair failed and we were unable to recover it. 00:41:40.009 [2024-10-01 22:40:35.212643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.009 [2024-10-01 22:40:35.212653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.009 qpair failed and we were unable to recover it. 00:41:40.009 [2024-10-01 22:40:35.212985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.009 [2024-10-01 22:40:35.212995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.009 qpair failed and we were unable to recover it. 00:41:40.009 [2024-10-01 22:40:35.213290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.009 [2024-10-01 22:40:35.213300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.009 qpair failed and we were unable to recover it. 00:41:40.009 [2024-10-01 22:40:35.213616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.009 [2024-10-01 22:40:35.213631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.010 qpair failed and we were unable to recover it. 00:41:40.010 [2024-10-01 22:40:35.213947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.010 [2024-10-01 22:40:35.213958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.010 qpair failed and we were unable to recover it. 00:41:40.010 [2024-10-01 22:40:35.214270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.010 [2024-10-01 22:40:35.214280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.010 qpair failed and we were unable to recover it. 00:41:40.010 [2024-10-01 22:40:35.214474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.010 [2024-10-01 22:40:35.214484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.010 qpair failed and we were unable to recover it. 00:41:40.010 [2024-10-01 22:40:35.214780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.010 [2024-10-01 22:40:35.214790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.010 qpair failed and we were unable to recover it. 00:41:40.010 [2024-10-01 22:40:35.215076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.010 [2024-10-01 22:40:35.215086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.010 qpair failed and we were unable to recover it. 
00:41:40.010 [2024-10-01 22:40:35.215399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.010 [2024-10-01 22:40:35.215408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.010 qpair failed and we were unable to recover it. 00:41:40.010 [2024-10-01 22:40:35.215749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.010 [2024-10-01 22:40:35.215760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.010 qpair failed and we were unable to recover it. 00:41:40.010 [2024-10-01 22:40:35.216068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.010 [2024-10-01 22:40:35.216078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.010 qpair failed and we were unable to recover it. 00:41:40.010 [2024-10-01 22:40:35.216273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.010 [2024-10-01 22:40:35.216283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.010 qpair failed and we were unable to recover it. 00:41:40.010 [2024-10-01 22:40:35.216635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.010 [2024-10-01 22:40:35.216645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.010 qpair failed and we were unable to recover it. 00:41:40.010 [2024-10-01 22:40:35.216918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.010 [2024-10-01 22:40:35.216928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.010 qpair failed and we were unable to recover it. 00:41:40.010 [2024-10-01 22:40:35.217252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.010 [2024-10-01 22:40:35.217262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.010 qpair failed and we were unable to recover it. 00:41:40.010 [2024-10-01 22:40:35.217576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.010 [2024-10-01 22:40:35.217586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.010 qpair failed and we were unable to recover it. 00:41:40.010 [2024-10-01 22:40:35.217903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.010 [2024-10-01 22:40:35.217913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.010 qpair failed and we were unable to recover it. 00:41:40.010 [2024-10-01 22:40:35.218251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.010 [2024-10-01 22:40:35.218260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.010 qpair failed and we were unable to recover it. 
00:41:40.010 [2024-10-01 22:40:35.218564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.010 [2024-10-01 22:40:35.218573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.010 qpair failed and we were unable to recover it. 00:41:40.010 [2024-10-01 22:40:35.218863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.010 [2024-10-01 22:40:35.218874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.010 qpair failed and we were unable to recover it. 00:41:40.010 [2024-10-01 22:40:35.219177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.010 [2024-10-01 22:40:35.219187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.010 qpair failed and we were unable to recover it. 00:41:40.010 [2024-10-01 22:40:35.219470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.010 [2024-10-01 22:40:35.219480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.010 qpair failed and we were unable to recover it. 00:41:40.010 [2024-10-01 22:40:35.219798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.010 [2024-10-01 22:40:35.219810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.010 qpair failed and we were unable to recover it. 00:41:40.010 [2024-10-01 22:40:35.220113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.010 [2024-10-01 22:40:35.220124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.010 qpair failed and we were unable to recover it. 00:41:40.010 [2024-10-01 22:40:35.220313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.010 [2024-10-01 22:40:35.220324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.010 qpair failed and we were unable to recover it. 00:41:40.010 [2024-10-01 22:40:35.220505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.010 [2024-10-01 22:40:35.220516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.010 qpair failed and we were unable to recover it. 00:41:40.010 [2024-10-01 22:40:35.220863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.010 [2024-10-01 22:40:35.220874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.010 qpair failed and we were unable to recover it. 00:41:40.010 [2024-10-01 22:40:35.221150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.010 [2024-10-01 22:40:35.221160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.010 qpair failed and we were unable to recover it. 
00:41:40.010 [2024-10-01 22:40:35.221482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.010 [2024-10-01 22:40:35.221493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.010 qpair failed and we were unable to recover it. 00:41:40.010 [2024-10-01 22:40:35.221797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.010 [2024-10-01 22:40:35.221806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.010 qpair failed and we were unable to recover it. 00:41:40.010 [2024-10-01 22:40:35.222104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.010 [2024-10-01 22:40:35.222115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.010 qpair failed and we were unable to recover it. 00:41:40.010 [2024-10-01 22:40:35.222418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.010 [2024-10-01 22:40:35.222428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.010 qpair failed and we were unable to recover it. 00:41:40.010 [2024-10-01 22:40:35.222732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.010 [2024-10-01 22:40:35.222742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.010 qpair failed and we were unable to recover it. 00:41:40.010 [2024-10-01 22:40:35.223050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.010 [2024-10-01 22:40:35.223059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.010 qpair failed and we were unable to recover it. 00:41:40.010 [2024-10-01 22:40:35.223364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.010 [2024-10-01 22:40:35.223373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.010 qpair failed and we were unable to recover it. 00:41:40.010 [2024-10-01 22:40:35.223699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.010 [2024-10-01 22:40:35.223709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.010 qpair failed and we were unable to recover it. 00:41:40.010 [2024-10-01 22:40:35.223998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.010 [2024-10-01 22:40:35.224016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.010 qpair failed and we were unable to recover it. 00:41:40.010 [2024-10-01 22:40:35.224347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.010 [2024-10-01 22:40:35.224357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.010 qpair failed and we were unable to recover it. 
00:41:40.010 [2024-10-01 22:40:35.224664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.010 [2024-10-01 22:40:35.224674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.010 qpair failed and we were unable to recover it. 00:41:40.010 [2024-10-01 22:40:35.224971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.010 [2024-10-01 22:40:35.224981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.010 qpair failed and we were unable to recover it. 00:41:40.010 [2024-10-01 22:40:35.225288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.010 [2024-10-01 22:40:35.225297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.010 qpair failed and we were unable to recover it. 00:41:40.010 [2024-10-01 22:40:35.225600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.010 [2024-10-01 22:40:35.225609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.010 qpair failed and we were unable to recover it. 00:41:40.011 [2024-10-01 22:40:35.225921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.011 [2024-10-01 22:40:35.225931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.011 qpair failed and we were unable to recover it. 00:41:40.011 [2024-10-01 22:40:35.226141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.011 [2024-10-01 22:40:35.226151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.011 qpair failed and we were unable to recover it. 00:41:40.011 [2024-10-01 22:40:35.226437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.011 [2024-10-01 22:40:35.226447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.011 qpair failed and we were unable to recover it. 00:41:40.011 [2024-10-01 22:40:35.226730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.011 [2024-10-01 22:40:35.226740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.011 qpair failed and we were unable to recover it. 00:41:40.011 [2024-10-01 22:40:35.227063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.011 [2024-10-01 22:40:35.227072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.011 qpair failed and we were unable to recover it. 00:41:40.011 [2024-10-01 22:40:35.227374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.011 [2024-10-01 22:40:35.227384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.011 qpair failed and we were unable to recover it. 
00:41:40.011 [2024-10-01 22:40:35.227694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:40.011 [2024-10-01 22:40:35.227704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420
00:41:40.011 qpair failed and we were unable to recover it.
00:41:40.293 [... the same three-line connect() failure (errno = 111, tqpair=0x130a180, addr=10.0.0.2, port=4420) repeated for successive reconnect attempts through 2024-10-01 22:40:35.293045 ...]
00:41:40.293 [2024-10-01 22:40:35.293434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.293 [2024-10-01 22:40:35.293444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.293 qpair failed and we were unable to recover it. 00:41:40.293 [2024-10-01 22:40:35.293775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.294 [2024-10-01 22:40:35.293786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.294 qpair failed and we were unable to recover it. 00:41:40.294 [2024-10-01 22:40:35.294099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.294 [2024-10-01 22:40:35.294109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.294 qpair failed and we were unable to recover it. 00:41:40.294 [2024-10-01 22:40:35.294414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.294 [2024-10-01 22:40:35.294423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.294 qpair failed and we were unable to recover it. 00:41:40.294 [2024-10-01 22:40:35.294710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.294 [2024-10-01 22:40:35.294720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.294 qpair failed and we were unable to recover it. 00:41:40.294 [2024-10-01 22:40:35.295051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.294 [2024-10-01 22:40:35.295061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.294 qpair failed and we were unable to recover it. 00:41:40.294 [2024-10-01 22:40:35.295367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.294 [2024-10-01 22:40:35.295376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.294 qpair failed and we were unable to recover it. 00:41:40.294 [2024-10-01 22:40:35.295764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.294 [2024-10-01 22:40:35.295774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.294 qpair failed and we were unable to recover it. 00:41:40.294 [2024-10-01 22:40:35.296080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.294 [2024-10-01 22:40:35.296093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.294 qpair failed and we were unable to recover it. 00:41:40.294 [2024-10-01 22:40:35.296422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.294 [2024-10-01 22:40:35.296432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.294 qpair failed and we were unable to recover it. 
00:41:40.294 [2024-10-01 22:40:35.296707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.294 [2024-10-01 22:40:35.296717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.294 qpair failed and we were unable to recover it. 00:41:40.294 [2024-10-01 22:40:35.297024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.294 [2024-10-01 22:40:35.297034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.294 qpair failed and we were unable to recover it. 00:41:40.294 [2024-10-01 22:40:35.297329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.294 [2024-10-01 22:40:35.297338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.294 qpair failed and we were unable to recover it. 00:41:40.294 [2024-10-01 22:40:35.297617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.294 [2024-10-01 22:40:35.297630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.294 qpair failed and we were unable to recover it. 00:41:40.294 [2024-10-01 22:40:35.297819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.294 [2024-10-01 22:40:35.297829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.294 qpair failed and we were unable to recover it. 00:41:40.294 [2024-10-01 22:40:35.298107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.294 [2024-10-01 22:40:35.298118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.294 qpair failed and we were unable to recover it. 00:41:40.294 [2024-10-01 22:40:35.298431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.294 [2024-10-01 22:40:35.298442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.294 qpair failed and we were unable to recover it. 00:41:40.294 [2024-10-01 22:40:35.298766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.294 [2024-10-01 22:40:35.298777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.294 qpair failed and we were unable to recover it. 00:41:40.294 [2024-10-01 22:40:35.299076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.294 [2024-10-01 22:40:35.299087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.294 qpair failed and we were unable to recover it. 00:41:40.294 [2024-10-01 22:40:35.299393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.294 [2024-10-01 22:40:35.299403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.294 qpair failed and we were unable to recover it. 
00:41:40.294 [2024-10-01 22:40:35.299703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.294 [2024-10-01 22:40:35.299713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.294 qpair failed and we were unable to recover it. 00:41:40.294 [2024-10-01 22:40:35.300005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.294 [2024-10-01 22:40:35.300015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.294 qpair failed and we were unable to recover it. 00:41:40.294 [2024-10-01 22:40:35.300327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.294 [2024-10-01 22:40:35.300337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.294 qpair failed and we were unable to recover it. 00:41:40.294 [2024-10-01 22:40:35.300635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.294 [2024-10-01 22:40:35.300646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.294 qpair failed and we were unable to recover it. 00:41:40.294 [2024-10-01 22:40:35.300972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.294 [2024-10-01 22:40:35.300982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.294 qpair failed and we were unable to recover it. 00:41:40.294 [2024-10-01 22:40:35.301146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.294 [2024-10-01 22:40:35.301157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.294 qpair failed and we were unable to recover it. 00:41:40.294 [2024-10-01 22:40:35.301430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.294 [2024-10-01 22:40:35.301439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.294 qpair failed and we were unable to recover it. 00:41:40.294 [2024-10-01 22:40:35.301745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.294 [2024-10-01 22:40:35.301755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.294 qpair failed and we were unable to recover it. 00:41:40.294 [2024-10-01 22:40:35.302073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.294 [2024-10-01 22:40:35.302083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.294 qpair failed and we were unable to recover it. 00:41:40.294 [2024-10-01 22:40:35.302367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.294 [2024-10-01 22:40:35.302376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.294 qpair failed and we were unable to recover it. 
00:41:40.294 [2024-10-01 22:40:35.302694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.294 [2024-10-01 22:40:35.302705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.294 qpair failed and we were unable to recover it. 00:41:40.294 [2024-10-01 22:40:35.302995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.294 [2024-10-01 22:40:35.303004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.294 qpair failed and we were unable to recover it. 00:41:40.294 [2024-10-01 22:40:35.303311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.294 [2024-10-01 22:40:35.303321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.294 qpair failed and we were unable to recover it. 00:41:40.294 [2024-10-01 22:40:35.303526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.294 [2024-10-01 22:40:35.303536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.295 qpair failed and we were unable to recover it. 00:41:40.295 [2024-10-01 22:40:35.303859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.295 [2024-10-01 22:40:35.303869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.295 qpair failed and we were unable to recover it. 00:41:40.295 [2024-10-01 22:40:35.304144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.295 [2024-10-01 22:40:35.304156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.295 qpair failed and we were unable to recover it. 00:41:40.295 [2024-10-01 22:40:35.304461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.295 [2024-10-01 22:40:35.304470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.295 qpair failed and we were unable to recover it. 00:41:40.295 [2024-10-01 22:40:35.304752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.295 [2024-10-01 22:40:35.304763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.295 qpair failed and we were unable to recover it. 00:41:40.295 [2024-10-01 22:40:35.305073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.295 [2024-10-01 22:40:35.305082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.295 qpair failed and we were unable to recover it. 00:41:40.295 [2024-10-01 22:40:35.305394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.295 [2024-10-01 22:40:35.305405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.295 qpair failed and we were unable to recover it. 
00:41:40.295 [2024-10-01 22:40:35.305729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.295 [2024-10-01 22:40:35.305739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.295 qpair failed and we were unable to recover it. 00:41:40.295 [2024-10-01 22:40:35.306031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.295 [2024-10-01 22:40:35.306047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.295 qpair failed and we were unable to recover it. 00:41:40.295 [2024-10-01 22:40:35.306343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.295 [2024-10-01 22:40:35.306354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.295 qpair failed and we were unable to recover it. 00:41:40.295 [2024-10-01 22:40:35.306662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.295 [2024-10-01 22:40:35.306673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.295 qpair failed and we were unable to recover it. 00:41:40.295 [2024-10-01 22:40:35.306978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.295 [2024-10-01 22:40:35.306988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.295 qpair failed and we were unable to recover it. 00:41:40.295 [2024-10-01 22:40:35.307277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.295 [2024-10-01 22:40:35.307288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.295 qpair failed and we were unable to recover it. 00:41:40.295 [2024-10-01 22:40:35.307589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.295 [2024-10-01 22:40:35.307599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.295 qpair failed and we were unable to recover it. 00:41:40.295 [2024-10-01 22:40:35.307880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.295 [2024-10-01 22:40:35.307890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.295 qpair failed and we were unable to recover it. 00:41:40.295 [2024-10-01 22:40:35.308086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.295 [2024-10-01 22:40:35.308097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.295 qpair failed and we were unable to recover it. 00:41:40.295 [2024-10-01 22:40:35.308360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.295 [2024-10-01 22:40:35.308369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.295 qpair failed and we were unable to recover it. 
00:41:40.295 [2024-10-01 22:40:35.308675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.295 [2024-10-01 22:40:35.308685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.295 qpair failed and we were unable to recover it. 00:41:40.295 [2024-10-01 22:40:35.308979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.295 [2024-10-01 22:40:35.308989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.295 qpair failed and we were unable to recover it. 00:41:40.295 [2024-10-01 22:40:35.309296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.295 [2024-10-01 22:40:35.309306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.295 qpair failed and we were unable to recover it. 00:41:40.295 [2024-10-01 22:40:35.309585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.295 [2024-10-01 22:40:35.309595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.295 qpair failed and we were unable to recover it. 00:41:40.295 [2024-10-01 22:40:35.309950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.295 [2024-10-01 22:40:35.309961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.295 qpair failed and we were unable to recover it. 00:41:40.295 [2024-10-01 22:40:35.310265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.295 [2024-10-01 22:40:35.310275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.295 qpair failed and we were unable to recover it. 00:41:40.295 [2024-10-01 22:40:35.310578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.295 [2024-10-01 22:40:35.310588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.295 qpair failed and we were unable to recover it. 00:41:40.295 [2024-10-01 22:40:35.310874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.295 [2024-10-01 22:40:35.310885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.295 qpair failed and we were unable to recover it. 00:41:40.295 [2024-10-01 22:40:35.311191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.295 [2024-10-01 22:40:35.311200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.295 qpair failed and we were unable to recover it. 00:41:40.295 [2024-10-01 22:40:35.311400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.295 [2024-10-01 22:40:35.311410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.295 qpair failed and we were unable to recover it. 
00:41:40.295 [2024-10-01 22:40:35.311716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.295 [2024-10-01 22:40:35.311727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.295 qpair failed and we were unable to recover it. 00:41:40.295 [2024-10-01 22:40:35.312006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.295 [2024-10-01 22:40:35.312015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.295 qpair failed and we were unable to recover it. 00:41:40.295 [2024-10-01 22:40:35.312239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.295 [2024-10-01 22:40:35.312249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.295 qpair failed and we were unable to recover it. 00:41:40.295 [2024-10-01 22:40:35.312559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.295 [2024-10-01 22:40:35.312568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.295 qpair failed and we were unable to recover it. 00:41:40.295 [2024-10-01 22:40:35.312853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.295 [2024-10-01 22:40:35.312864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.295 qpair failed and we were unable to recover it. 00:41:40.295 [2024-10-01 22:40:35.313144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.295 [2024-10-01 22:40:35.313155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.295 qpair failed and we were unable to recover it. 00:41:40.295 [2024-10-01 22:40:35.313317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.295 [2024-10-01 22:40:35.313327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.295 qpair failed and we were unable to recover it. 00:41:40.295 [2024-10-01 22:40:35.313541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.295 [2024-10-01 22:40:35.313551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.295 qpair failed and we were unable to recover it. 00:41:40.295 [2024-10-01 22:40:35.313833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.295 [2024-10-01 22:40:35.313843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.295 qpair failed and we were unable to recover it. 00:41:40.295 [2024-10-01 22:40:35.314168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.296 [2024-10-01 22:40:35.314177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.296 qpair failed and we were unable to recover it. 
00:41:40.296 [2024-10-01 22:40:35.314338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.296 [2024-10-01 22:40:35.314349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.296 qpair failed and we were unable to recover it. 00:41:40.296 [2024-10-01 22:40:35.314680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.296 [2024-10-01 22:40:35.314691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.296 qpair failed and we were unable to recover it. 00:41:40.296 [2024-10-01 22:40:35.315015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.296 [2024-10-01 22:40:35.315026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.296 qpair failed and we were unable to recover it. 00:41:40.296 [2024-10-01 22:40:35.315308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.296 [2024-10-01 22:40:35.315318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.296 qpair failed and we were unable to recover it. 00:41:40.296 [2024-10-01 22:40:35.315628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.296 [2024-10-01 22:40:35.315638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.296 qpair failed and we were unable to recover it. 00:41:40.296 [2024-10-01 22:40:35.315831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.296 [2024-10-01 22:40:35.315842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.296 qpair failed and we were unable to recover it. 00:41:40.296 [2024-10-01 22:40:35.316158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.296 [2024-10-01 22:40:35.316171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.296 qpair failed and we were unable to recover it. 00:41:40.296 [2024-10-01 22:40:35.316496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.296 [2024-10-01 22:40:35.316506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.296 qpair failed and we were unable to recover it. 00:41:40.296 [2024-10-01 22:40:35.316789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.296 [2024-10-01 22:40:35.316799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.296 qpair failed and we were unable to recover it. 00:41:40.296 [2024-10-01 22:40:35.317088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.296 [2024-10-01 22:40:35.317107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.296 qpair failed and we were unable to recover it. 
00:41:40.296 [2024-10-01 22:40:35.317410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.296 [2024-10-01 22:40:35.317419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.296 qpair failed and we were unable to recover it. 00:41:40.296 [2024-10-01 22:40:35.317694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.296 [2024-10-01 22:40:35.317704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.296 qpair failed and we were unable to recover it. 00:41:40.296 [2024-10-01 22:40:35.318012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.296 [2024-10-01 22:40:35.318022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.296 qpair failed and we were unable to recover it. 00:41:40.296 [2024-10-01 22:40:35.318329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.296 [2024-10-01 22:40:35.318338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.296 qpair failed and we were unable to recover it. 00:41:40.296 [2024-10-01 22:40:35.318644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.296 [2024-10-01 22:40:35.318654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.296 qpair failed and we were unable to recover it. 00:41:40.296 [2024-10-01 22:40:35.318971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.296 [2024-10-01 22:40:35.318981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.296 qpair failed and we were unable to recover it. 00:41:40.296 [2024-10-01 22:40:35.319286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.296 [2024-10-01 22:40:35.319296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.296 qpair failed and we were unable to recover it. 00:41:40.296 [2024-10-01 22:40:35.319601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.296 [2024-10-01 22:40:35.319611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.296 qpair failed and we were unable to recover it. 00:41:40.296 [2024-10-01 22:40:35.319808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.296 [2024-10-01 22:40:35.319819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.296 qpair failed and we were unable to recover it. 00:41:40.296 [2024-10-01 22:40:35.320115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.296 [2024-10-01 22:40:35.320124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.296 qpair failed and we were unable to recover it. 
00:41:40.296 [2024-10-01 22:40:35.320330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.296 [2024-10-01 22:40:35.320341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.296 qpair failed and we were unable to recover it. 00:41:40.296 [2024-10-01 22:40:35.320648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.296 [2024-10-01 22:40:35.320658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.296 qpair failed and we were unable to recover it. 00:41:40.296 [2024-10-01 22:40:35.320970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.296 [2024-10-01 22:40:35.320980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.296 qpair failed and we were unable to recover it. 00:41:40.296 [2024-10-01 22:40:35.321163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.296 [2024-10-01 22:40:35.321174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.296 qpair failed and we were unable to recover it. 00:41:40.296 [2024-10-01 22:40:35.321484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.296 [2024-10-01 22:40:35.321494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.296 qpair failed and we were unable to recover it. 00:41:40.296 [2024-10-01 22:40:35.321796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.296 [2024-10-01 22:40:35.321806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.296 qpair failed and we were unable to recover it. 00:41:40.296 [2024-10-01 22:40:35.322099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.296 [2024-10-01 22:40:35.322117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.296 qpair failed and we were unable to recover it. 00:41:40.296 [2024-10-01 22:40:35.322443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.296 [2024-10-01 22:40:35.322453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.296 qpair failed and we were unable to recover it. 00:41:40.296 [2024-10-01 22:40:35.322760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.296 [2024-10-01 22:40:35.322770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.296 qpair failed and we were unable to recover it. 00:41:40.296 [2024-10-01 22:40:35.323079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.296 [2024-10-01 22:40:35.323089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.296 qpair failed and we were unable to recover it. 
00:41:40.296 [2024-10-01 22:40:35.323393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.296 [2024-10-01 22:40:35.323403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.296 qpair failed and we were unable to recover it. 00:41:40.296 [2024-10-01 22:40:35.323682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.296 [2024-10-01 22:40:35.323692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.296 qpair failed and we were unable to recover it. 00:41:40.296 [2024-10-01 22:40:35.323986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.296 [2024-10-01 22:40:35.324003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.296 qpair failed and we were unable to recover it. 00:41:40.296 [2024-10-01 22:40:35.324384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.296 [2024-10-01 22:40:35.324396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.296 qpair failed and we were unable to recover it. 00:41:40.296 [2024-10-01 22:40:35.324701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.296 [2024-10-01 22:40:35.324712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.296 qpair failed and we were unable to recover it. 00:41:40.296 [2024-10-01 22:40:35.324996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.296 [2024-10-01 22:40:35.325006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.296 qpair failed and we were unable to recover it. 00:41:40.296 [2024-10-01 22:40:35.325307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.296 [2024-10-01 22:40:35.325316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.296 qpair failed and we were unable to recover it. 00:41:40.296 [2024-10-01 22:40:35.325620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.296 [2024-10-01 22:40:35.325636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.297 qpair failed and we were unable to recover it. 00:41:40.297 [2024-10-01 22:40:35.325976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.297 [2024-10-01 22:40:35.325986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.297 qpair failed and we were unable to recover it. 00:41:40.297 [2024-10-01 22:40:35.326355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.297 [2024-10-01 22:40:35.326365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.297 qpair failed and we were unable to recover it. 
00:41:40.297 [2024-10-01 22:40:35.326663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.297 [2024-10-01 22:40:35.326673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.297 qpair failed and we were unable to recover it. 00:41:40.297 [2024-10-01 22:40:35.326993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.297 [2024-10-01 22:40:35.327005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.297 qpair failed and we were unable to recover it. 00:41:40.297 [2024-10-01 22:40:35.327331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.297 [2024-10-01 22:40:35.327342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.297 qpair failed and we were unable to recover it. 00:41:40.297 [2024-10-01 22:40:35.327622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.297 [2024-10-01 22:40:35.327639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.297 qpair failed and we were unable to recover it. 00:41:40.297 [2024-10-01 22:40:35.327820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.297 [2024-10-01 22:40:35.327831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.297 qpair failed and we were unable to recover it. 00:41:40.297 [2024-10-01 22:40:35.328161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.297 [2024-10-01 22:40:35.328171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.297 qpair failed and we were unable to recover it. 00:41:40.297 [2024-10-01 22:40:35.328468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.297 [2024-10-01 22:40:35.328477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.297 qpair failed and we were unable to recover it. 00:41:40.297 [2024-10-01 22:40:35.328791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.297 [2024-10-01 22:40:35.328802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.297 qpair failed and we were unable to recover it. 00:41:40.297 [2024-10-01 22:40:35.328964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.297 [2024-10-01 22:40:35.328975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.297 qpair failed and we were unable to recover it. 00:41:40.297 [2024-10-01 22:40:35.329348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.297 [2024-10-01 22:40:35.329357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.297 qpair failed and we were unable to recover it. 
00:41:40.297 [2024-10-01 22:40:35.329663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.297 [2024-10-01 22:40:35.329674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.297 qpair failed and we were unable to recover it. 00:41:40.297 [2024-10-01 22:40:35.329965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.297 [2024-10-01 22:40:35.329975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.297 qpair failed and we were unable to recover it. 00:41:40.297 [2024-10-01 22:40:35.330312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.297 [2024-10-01 22:40:35.330322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.297 qpair failed and we were unable to recover it. 00:41:40.297 [2024-10-01 22:40:35.330634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.297 [2024-10-01 22:40:35.330644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.297 qpair failed and we were unable to recover it. 00:41:40.297 [2024-10-01 22:40:35.330856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.297 [2024-10-01 22:40:35.330866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.297 qpair failed and we were unable to recover it. 00:41:40.297 [2024-10-01 22:40:35.331195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.297 [2024-10-01 22:40:35.331205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.297 qpair failed and we were unable to recover it. 00:41:40.297 [2024-10-01 22:40:35.331513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.297 [2024-10-01 22:40:35.331523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.297 qpair failed and we were unable to recover it. 00:41:40.297 [2024-10-01 22:40:35.331831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.297 [2024-10-01 22:40:35.331843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.297 qpair failed and we were unable to recover it. 00:41:40.297 [2024-10-01 22:40:35.332155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.297 [2024-10-01 22:40:35.332165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.297 qpair failed and we were unable to recover it. 00:41:40.297 [2024-10-01 22:40:35.332433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.297 [2024-10-01 22:40:35.332443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.297 qpair failed and we were unable to recover it. 
00:41:40.297 [2024-10-01 22:40:35.332644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.297 [2024-10-01 22:40:35.332657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.297 qpair failed and we were unable to recover it. 00:41:40.297 [2024-10-01 22:40:35.332936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.297 [2024-10-01 22:40:35.332946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.297 qpair failed and we were unable to recover it. 00:41:40.297 [2024-10-01 22:40:35.333150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.297 [2024-10-01 22:40:35.333162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.297 qpair failed and we were unable to recover it. 00:41:40.297 [2024-10-01 22:40:35.333476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.297 [2024-10-01 22:40:35.333487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.297 qpair failed and we were unable to recover it. 00:41:40.297 [2024-10-01 22:40:35.333727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.297 [2024-10-01 22:40:35.333738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.297 qpair failed and we were unable to recover it. 00:41:40.297 [2024-10-01 22:40:35.334053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.297 [2024-10-01 22:40:35.334065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.297 qpair failed and we were unable to recover it. 00:41:40.297 [2024-10-01 22:40:35.334381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.297 [2024-10-01 22:40:35.334391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.297 qpair failed and we were unable to recover it. 00:41:40.297 [2024-10-01 22:40:35.334715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.297 [2024-10-01 22:40:35.334725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.297 qpair failed and we were unable to recover it. 00:41:40.297 [2024-10-01 22:40:35.335084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.297 [2024-10-01 22:40:35.335095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.297 qpair failed and we were unable to recover it. 00:41:40.297 [2024-10-01 22:40:35.335393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.297 [2024-10-01 22:40:35.335404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.297 qpair failed and we were unable to recover it. 
00:41:40.303 [2024-10-01 22:40:35.395582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.303 [2024-10-01 22:40:35.395592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.303 qpair failed and we were unable to recover it. 00:41:40.303 [2024-10-01 22:40:35.395903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.303 [2024-10-01 22:40:35.395913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.303 qpair failed and we were unable to recover it. 00:41:40.303 [2024-10-01 22:40:35.396224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.303 [2024-10-01 22:40:35.396233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.303 qpair failed and we were unable to recover it. 00:41:40.303 [2024-10-01 22:40:35.396542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.303 [2024-10-01 22:40:35.396552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.303 qpair failed and we were unable to recover it. 00:41:40.303 [2024-10-01 22:40:35.396760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.303 [2024-10-01 22:40:35.396771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.303 qpair failed and we were unable to recover it. 00:41:40.303 [2024-10-01 22:40:35.397075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.303 [2024-10-01 22:40:35.397085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.303 qpair failed and we were unable to recover it. 00:41:40.303 [2024-10-01 22:40:35.397412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.303 [2024-10-01 22:40:35.397422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.303 qpair failed and we were unable to recover it. 00:41:40.303 [2024-10-01 22:40:35.397741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.303 [2024-10-01 22:40:35.397753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.303 qpair failed and we were unable to recover it. 00:41:40.303 [2024-10-01 22:40:35.398062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.303 [2024-10-01 22:40:35.398072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.303 qpair failed and we were unable to recover it. 00:41:40.303 [2024-10-01 22:40:35.398358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.303 [2024-10-01 22:40:35.398373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.303 qpair failed and we were unable to recover it. 
00:41:40.303 [2024-10-01 22:40:35.398692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.303 [2024-10-01 22:40:35.398703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.303 qpair failed and we were unable to recover it. 00:41:40.303 [2024-10-01 22:40:35.399012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.303 [2024-10-01 22:40:35.399021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.303 qpair failed and we were unable to recover it. 00:41:40.303 [2024-10-01 22:40:35.399341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.303 [2024-10-01 22:40:35.399351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.303 qpair failed and we were unable to recover it. 00:41:40.303 [2024-10-01 22:40:35.399663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.303 [2024-10-01 22:40:35.399673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.303 qpair failed and we were unable to recover it. 00:41:40.303 [2024-10-01 22:40:35.400011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.303 [2024-10-01 22:40:35.400021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.303 qpair failed and we were unable to recover it. 00:41:40.303 [2024-10-01 22:40:35.400333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.303 [2024-10-01 22:40:35.400344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.303 qpair failed and we were unable to recover it. 00:41:40.303 [2024-10-01 22:40:35.400659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.303 [2024-10-01 22:40:35.400669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.303 qpair failed and we were unable to recover it. 00:41:40.303 [2024-10-01 22:40:35.400976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.303 [2024-10-01 22:40:35.400987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.303 qpair failed and we were unable to recover it. 00:41:40.303 [2024-10-01 22:40:35.401183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.303 [2024-10-01 22:40:35.401193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.303 qpair failed and we were unable to recover it. 00:41:40.303 [2024-10-01 22:40:35.401497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.303 [2024-10-01 22:40:35.401507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.303 qpair failed and we were unable to recover it. 
00:41:40.303 [2024-10-01 22:40:35.401875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.303 [2024-10-01 22:40:35.401885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.303 qpair failed and we were unable to recover it. 00:41:40.303 [2024-10-01 22:40:35.402171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.303 [2024-10-01 22:40:35.402187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.303 qpair failed and we were unable to recover it. 00:41:40.303 [2024-10-01 22:40:35.402518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.303 [2024-10-01 22:40:35.402527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.303 qpair failed and we were unable to recover it. 00:41:40.303 [2024-10-01 22:40:35.402851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.303 [2024-10-01 22:40:35.402861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.303 qpair failed and we were unable to recover it. 00:41:40.303 [2024-10-01 22:40:35.403059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.303 [2024-10-01 22:40:35.403070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.303 qpair failed and we were unable to recover it. 00:41:40.303 [2024-10-01 22:40:35.403410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.303 [2024-10-01 22:40:35.403420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.303 qpair failed and we were unable to recover it. 00:41:40.303 [2024-10-01 22:40:35.403754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.303 [2024-10-01 22:40:35.403764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.303 qpair failed and we were unable to recover it. 00:41:40.303 [2024-10-01 22:40:35.404090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.303 [2024-10-01 22:40:35.404100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.303 qpair failed and we were unable to recover it. 00:41:40.303 [2024-10-01 22:40:35.404412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.303 [2024-10-01 22:40:35.404423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.303 qpair failed and we were unable to recover it. 00:41:40.303 [2024-10-01 22:40:35.404600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.303 [2024-10-01 22:40:35.404610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.303 qpair failed and we were unable to recover it. 
00:41:40.303 [2024-10-01 22:40:35.405019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.303 [2024-10-01 22:40:35.405030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.303 qpair failed and we were unable to recover it. 00:41:40.303 [2024-10-01 22:40:35.405211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.303 [2024-10-01 22:40:35.405220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.303 qpair failed and we were unable to recover it. 00:41:40.303 [2024-10-01 22:40:35.405602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.303 [2024-10-01 22:40:35.405612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.303 qpair failed and we were unable to recover it. 00:41:40.303 [2024-10-01 22:40:35.405821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.303 [2024-10-01 22:40:35.405831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.303 qpair failed and we were unable to recover it. 00:41:40.303 [2024-10-01 22:40:35.406038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.303 [2024-10-01 22:40:35.406048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.303 qpair failed and we were unable to recover it. 00:41:40.303 [2024-10-01 22:40:35.406364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.303 [2024-10-01 22:40:35.406374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.303 qpair failed and we were unable to recover it. 00:41:40.303 [2024-10-01 22:40:35.406645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.304 [2024-10-01 22:40:35.406656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.304 qpair failed and we were unable to recover it. 00:41:40.304 [2024-10-01 22:40:35.406905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.304 [2024-10-01 22:40:35.406915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.304 qpair failed and we were unable to recover it. 00:41:40.304 [2024-10-01 22:40:35.407238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.304 [2024-10-01 22:40:35.407248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.304 qpair failed and we were unable to recover it. 00:41:40.304 [2024-10-01 22:40:35.407568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.304 [2024-10-01 22:40:35.407579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.304 qpair failed and we were unable to recover it. 
00:41:40.304 [2024-10-01 22:40:35.407891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.304 [2024-10-01 22:40:35.407901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.304 qpair failed and we were unable to recover it. 00:41:40.304 [2024-10-01 22:40:35.408201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.304 [2024-10-01 22:40:35.408211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.304 qpair failed and we were unable to recover it. 00:41:40.304 [2024-10-01 22:40:35.408400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.304 [2024-10-01 22:40:35.408411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.304 qpair failed and we were unable to recover it. 00:41:40.304 [2024-10-01 22:40:35.408726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.304 [2024-10-01 22:40:35.408736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.304 qpair failed and we were unable to recover it. 00:41:40.304 [2024-10-01 22:40:35.408795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.304 [2024-10-01 22:40:35.408805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.304 qpair failed and we were unable to recover it. 00:41:40.304 [2024-10-01 22:40:35.409081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.304 [2024-10-01 22:40:35.409091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.304 qpair failed and we were unable to recover it. 00:41:40.304 [2024-10-01 22:40:35.409261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.304 [2024-10-01 22:40:35.409272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.304 qpair failed and we were unable to recover it. 00:41:40.304 [2024-10-01 22:40:35.409552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.304 [2024-10-01 22:40:35.409562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.304 qpair failed and we were unable to recover it. 00:41:40.304 [2024-10-01 22:40:35.409934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.304 [2024-10-01 22:40:35.409945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.304 qpair failed and we were unable to recover it. 00:41:40.304 [2024-10-01 22:40:35.410257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.304 [2024-10-01 22:40:35.410267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.304 qpair failed and we were unable to recover it. 
00:41:40.304 [2024-10-01 22:40:35.410569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.304 [2024-10-01 22:40:35.410579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.304 qpair failed and we were unable to recover it. 00:41:40.304 [2024-10-01 22:40:35.410852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.304 [2024-10-01 22:40:35.410862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.304 qpair failed and we were unable to recover it. 00:41:40.304 [2024-10-01 22:40:35.411180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.304 [2024-10-01 22:40:35.411189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.304 qpair failed and we were unable to recover it. 00:41:40.304 [2024-10-01 22:40:35.411502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.304 [2024-10-01 22:40:35.411512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.304 qpair failed and we were unable to recover it. 00:41:40.304 [2024-10-01 22:40:35.411684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.304 [2024-10-01 22:40:35.411695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.304 qpair failed and we were unable to recover it. 00:41:40.304 [2024-10-01 22:40:35.412033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.304 [2024-10-01 22:40:35.412043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.304 qpair failed and we were unable to recover it. 00:41:40.304 [2024-10-01 22:40:35.412368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.304 [2024-10-01 22:40:35.412378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.304 qpair failed and we were unable to recover it. 00:41:40.304 [2024-10-01 22:40:35.412702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.304 [2024-10-01 22:40:35.412712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.304 qpair failed and we were unable to recover it. 00:41:40.304 [2024-10-01 22:40:35.412996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.304 [2024-10-01 22:40:35.413006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.304 qpair failed and we were unable to recover it. 00:41:40.304 [2024-10-01 22:40:35.413310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.304 [2024-10-01 22:40:35.413321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.304 qpair failed and we were unable to recover it. 
00:41:40.304 [2024-10-01 22:40:35.413521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.304 [2024-10-01 22:40:35.413532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.304 qpair failed and we were unable to recover it. 00:41:40.304 [2024-10-01 22:40:35.413719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.304 [2024-10-01 22:40:35.413729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.304 qpair failed and we were unable to recover it. 00:41:40.304 [2024-10-01 22:40:35.414046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.304 [2024-10-01 22:40:35.414056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.304 qpair failed and we were unable to recover it. 00:41:40.304 [2024-10-01 22:40:35.414232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.304 [2024-10-01 22:40:35.414243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.304 qpair failed and we were unable to recover it. 00:41:40.304 [2024-10-01 22:40:35.414564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.304 [2024-10-01 22:40:35.414574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.304 qpair failed and we were unable to recover it. 00:41:40.304 [2024-10-01 22:40:35.414870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.304 [2024-10-01 22:40:35.414881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.304 qpair failed and we were unable to recover it. 00:41:40.304 [2024-10-01 22:40:35.415200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.304 [2024-10-01 22:40:35.415210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.304 qpair failed and we were unable to recover it. 00:41:40.304 [2024-10-01 22:40:35.415495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.304 [2024-10-01 22:40:35.415505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.304 qpair failed and we were unable to recover it. 00:41:40.304 [2024-10-01 22:40:35.415834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.304 [2024-10-01 22:40:35.415844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.304 qpair failed and we were unable to recover it. 00:41:40.304 [2024-10-01 22:40:35.416154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.304 [2024-10-01 22:40:35.416164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.304 qpair failed and we were unable to recover it. 
00:41:40.304 [2024-10-01 22:40:35.416479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.304 [2024-10-01 22:40:35.416489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.304 qpair failed and we were unable to recover it. 00:41:40.304 [2024-10-01 22:40:35.416796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.305 [2024-10-01 22:40:35.416806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.305 qpair failed and we were unable to recover it. 00:41:40.305 [2024-10-01 22:40:35.416979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.305 [2024-10-01 22:40:35.416989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.305 qpair failed and we were unable to recover it. 00:41:40.305 [2024-10-01 22:40:35.417208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.305 [2024-10-01 22:40:35.417217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.305 qpair failed and we were unable to recover it. 00:41:40.305 [2024-10-01 22:40:35.417593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.305 [2024-10-01 22:40:35.417603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.305 qpair failed and we were unable to recover it. 00:41:40.305 [2024-10-01 22:40:35.417792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.305 [2024-10-01 22:40:35.417806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.305 qpair failed and we were unable to recover it. 00:41:40.305 [2024-10-01 22:40:35.418126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.305 [2024-10-01 22:40:35.418136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.305 qpair failed and we were unable to recover it. 00:41:40.305 [2024-10-01 22:40:35.418325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.305 [2024-10-01 22:40:35.418336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.305 qpair failed and we were unable to recover it. 00:41:40.305 [2024-10-01 22:40:35.418642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.305 [2024-10-01 22:40:35.418654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.305 qpair failed and we were unable to recover it. 00:41:40.305 [2024-10-01 22:40:35.418992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.305 [2024-10-01 22:40:35.419002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.305 qpair failed and we were unable to recover it. 
00:41:40.305 [2024-10-01 22:40:35.419317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.305 [2024-10-01 22:40:35.419327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.305 qpair failed and we were unable to recover it. 00:41:40.305 [2024-10-01 22:40:35.419604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.305 [2024-10-01 22:40:35.419614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.305 qpair failed and we were unable to recover it. 00:41:40.305 [2024-10-01 22:40:35.419982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.305 [2024-10-01 22:40:35.419992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.305 qpair failed and we were unable to recover it. 00:41:40.305 [2024-10-01 22:40:35.420195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.305 [2024-10-01 22:40:35.420205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.305 qpair failed and we were unable to recover it. 00:41:40.305 [2024-10-01 22:40:35.420400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.305 [2024-10-01 22:40:35.420410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.305 qpair failed and we were unable to recover it. 00:41:40.305 [2024-10-01 22:40:35.420765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.305 [2024-10-01 22:40:35.420776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.305 qpair failed and we were unable to recover it. 00:41:40.305 [2024-10-01 22:40:35.421087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.305 [2024-10-01 22:40:35.421096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.305 qpair failed and we were unable to recover it. 00:41:40.305 [2024-10-01 22:40:35.421421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.305 [2024-10-01 22:40:35.421430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.305 qpair failed and we were unable to recover it. 00:41:40.305 [2024-10-01 22:40:35.421611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.305 [2024-10-01 22:40:35.421620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.305 qpair failed and we were unable to recover it. 00:41:40.305 [2024-10-01 22:40:35.422013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.305 [2024-10-01 22:40:35.422024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.305 qpair failed and we were unable to recover it. 
00:41:40.305 [2024-10-01 22:40:35.422225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.305 [2024-10-01 22:40:35.422234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.305 qpair failed and we were unable to recover it. 00:41:40.305 [2024-10-01 22:40:35.422562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.305 [2024-10-01 22:40:35.422572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.305 qpair failed and we were unable to recover it. 00:41:40.305 [2024-10-01 22:40:35.422754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.305 [2024-10-01 22:40:35.422765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.305 qpair failed and we were unable to recover it. 00:41:40.305 [2024-10-01 22:40:35.422939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.305 [2024-10-01 22:40:35.422948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.305 qpair failed and we were unable to recover it. 00:41:40.305 [2024-10-01 22:40:35.423235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.305 [2024-10-01 22:40:35.423245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.305 qpair failed and we were unable to recover it. 00:41:40.305 [2024-10-01 22:40:35.423537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.305 [2024-10-01 22:40:35.423547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.305 qpair failed and we were unable to recover it. 00:41:40.305 [2024-10-01 22:40:35.423850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.305 [2024-10-01 22:40:35.423860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.305 qpair failed and we were unable to recover it. 00:41:40.305 [2024-10-01 22:40:35.424165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.305 [2024-10-01 22:40:35.424175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.305 qpair failed and we were unable to recover it. 00:41:40.305 [2024-10-01 22:40:35.424504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.305 [2024-10-01 22:40:35.424514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.305 qpair failed and we were unable to recover it. 00:41:40.305 [2024-10-01 22:40:35.424817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.305 [2024-10-01 22:40:35.424827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.305 qpair failed and we were unable to recover it. 
00:41:40.305 [2024-10-01 22:40:35.425156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.305 [2024-10-01 22:40:35.425166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.305 qpair failed and we were unable to recover it. 00:41:40.305 [2024-10-01 22:40:35.425464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.305 [2024-10-01 22:40:35.425474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.305 qpair failed and we were unable to recover it. 00:41:40.305 [2024-10-01 22:40:35.425685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.305 [2024-10-01 22:40:35.425697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.305 qpair failed and we were unable to recover it. 00:41:40.305 [2024-10-01 22:40:35.425997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.305 [2024-10-01 22:40:35.426007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.305 qpair failed and we were unable to recover it. 00:41:40.305 [2024-10-01 22:40:35.426323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.305 [2024-10-01 22:40:35.426334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.305 qpair failed and we were unable to recover it. 00:41:40.305 [2024-10-01 22:40:35.426506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.305 [2024-10-01 22:40:35.426518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.305 qpair failed and we were unable to recover it. 00:41:40.305 [2024-10-01 22:40:35.426708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.305 [2024-10-01 22:40:35.426719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.305 qpair failed and we were unable to recover it. 00:41:40.305 [2024-10-01 22:40:35.426988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.305 [2024-10-01 22:40:35.426997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.305 qpair failed and we were unable to recover it. 00:41:40.305 [2024-10-01 22:40:35.427306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.305 [2024-10-01 22:40:35.427317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.305 qpair failed and we were unable to recover it. 00:41:40.305 [2024-10-01 22:40:35.427612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.305 [2024-10-01 22:40:35.427622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.305 qpair failed and we were unable to recover it. 
00:41:40.305 [2024-10-01 22:40:35.427939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.306 [2024-10-01 22:40:35.427950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.306 qpair failed and we were unable to recover it. 00:41:40.306 [2024-10-01 22:40:35.428262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.306 [2024-10-01 22:40:35.428273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.306 qpair failed and we were unable to recover it. 00:41:40.306 [2024-10-01 22:40:35.428580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.306 [2024-10-01 22:40:35.428591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.306 qpair failed and we were unable to recover it. 00:41:40.306 [2024-10-01 22:40:35.428937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.306 [2024-10-01 22:40:35.428948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.306 qpair failed and we were unable to recover it. 00:41:40.306 [2024-10-01 22:40:35.429243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.306 [2024-10-01 22:40:35.429254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.306 qpair failed and we were unable to recover it. 00:41:40.306 [2024-10-01 22:40:35.429564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.306 [2024-10-01 22:40:35.429575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.306 qpair failed and we were unable to recover it. 00:41:40.306 [2024-10-01 22:40:35.429909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.306 [2024-10-01 22:40:35.429920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.306 qpair failed and we were unable to recover it. 00:41:40.306 [2024-10-01 22:40:35.430086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.306 [2024-10-01 22:40:35.430097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.306 qpair failed and we were unable to recover it. 00:41:40.306 [2024-10-01 22:40:35.430417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.306 [2024-10-01 22:40:35.430428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.306 qpair failed and we were unable to recover it. 00:41:40.306 [2024-10-01 22:40:35.430615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.306 [2024-10-01 22:40:35.430632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.306 qpair failed and we were unable to recover it. 
00:41:40.306 [2024-10-01 22:40:35.430949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.306 [2024-10-01 22:40:35.430959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.306 qpair failed and we were unable to recover it. 00:41:40.306 [2024-10-01 22:40:35.431268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.306 [2024-10-01 22:40:35.431277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.306 qpair failed and we were unable to recover it. 00:41:40.306 [2024-10-01 22:40:35.431559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.306 [2024-10-01 22:40:35.431569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.306 qpair failed and we were unable to recover it. 00:41:40.306 [2024-10-01 22:40:35.431918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.306 [2024-10-01 22:40:35.431928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.306 qpair failed and we were unable to recover it. 00:41:40.306 [2024-10-01 22:40:35.432242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.306 [2024-10-01 22:40:35.432252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.306 qpair failed and we were unable to recover it. 00:41:40.306 [2024-10-01 22:40:35.432558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.306 [2024-10-01 22:40:35.432567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.306 qpair failed and we were unable to recover it. 00:41:40.306 [2024-10-01 22:40:35.432871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.306 [2024-10-01 22:40:35.432882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.306 qpair failed and we were unable to recover it. 00:41:40.306 [2024-10-01 22:40:35.433198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.306 [2024-10-01 22:40:35.433208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.306 qpair failed and we were unable to recover it. 00:41:40.306 [2024-10-01 22:40:35.433519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.306 [2024-10-01 22:40:35.433529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.306 qpair failed and we were unable to recover it. 00:41:40.306 [2024-10-01 22:40:35.433797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.306 [2024-10-01 22:40:35.433807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.306 qpair failed and we were unable to recover it. 
00:41:40.306 [2024-10-01 22:40:35.434220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:40.306 [2024-10-01 22:40:35.434231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420
00:41:40.306 qpair failed and we were unable to recover it.
00:41:40.306 [... the same connect()/qpair-failure triple repeats for tqpair=0x130a180 (addr=10.0.0.2, port=4420, errno = 111), only the timestamps advancing ...]
00:41:40.311 [2024-10-01 22:40:35.497874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:40.311 [2024-10-01 22:40:35.497885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420
00:41:40.311 qpair failed and we were unable to recover it.
00:41:40.312 [2024-10-01 22:40:35.498177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.312 [2024-10-01 22:40:35.498187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.312 qpair failed and we were unable to recover it. 00:41:40.312 [2024-10-01 22:40:35.498480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.312 [2024-10-01 22:40:35.498490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.312 qpair failed and we were unable to recover it. 00:41:40.312 [2024-10-01 22:40:35.498819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.312 [2024-10-01 22:40:35.498832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.312 qpair failed and we were unable to recover it. 00:41:40.312 [2024-10-01 22:40:35.499138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.312 [2024-10-01 22:40:35.499150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.312 qpair failed and we were unable to recover it. 00:41:40.312 [2024-10-01 22:40:35.499472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.312 [2024-10-01 22:40:35.499481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.312 qpair failed and we were unable to recover it. 00:41:40.312 [2024-10-01 22:40:35.499670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.312 [2024-10-01 22:40:35.499682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.312 qpair failed and we were unable to recover it. 00:41:40.312 [2024-10-01 22:40:35.499980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.312 [2024-10-01 22:40:35.499990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.312 qpair failed and we were unable to recover it. 00:41:40.312 [2024-10-01 22:40:35.500279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.312 [2024-10-01 22:40:35.500290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.312 qpair failed and we were unable to recover it. 00:41:40.312 [2024-10-01 22:40:35.500600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.312 [2024-10-01 22:40:35.500610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.312 qpair failed and we were unable to recover it. 00:41:40.312 [2024-10-01 22:40:35.500901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.312 [2024-10-01 22:40:35.500911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.312 qpair failed and we were unable to recover it. 
00:41:40.312 [2024-10-01 22:40:35.501176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.312 [2024-10-01 22:40:35.501185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.312 qpair failed and we were unable to recover it. 00:41:40.312 [2024-10-01 22:40:35.501499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.312 [2024-10-01 22:40:35.501509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.312 qpair failed and we were unable to recover it. 00:41:40.312 [2024-10-01 22:40:35.501838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.312 [2024-10-01 22:40:35.501850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.312 qpair failed and we were unable to recover it. 00:41:40.312 [2024-10-01 22:40:35.502143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.312 [2024-10-01 22:40:35.502154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.312 qpair failed and we were unable to recover it. 00:41:40.312 [2024-10-01 22:40:35.502365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.312 [2024-10-01 22:40:35.502375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.312 qpair failed and we were unable to recover it. 00:41:40.312 [2024-10-01 22:40:35.502623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.312 [2024-10-01 22:40:35.502637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.312 qpair failed and we were unable to recover it. 00:41:40.312 [2024-10-01 22:40:35.502831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.312 [2024-10-01 22:40:35.502841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.312 qpair failed and we were unable to recover it. 00:41:40.312 [2024-10-01 22:40:35.503161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.312 [2024-10-01 22:40:35.503171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.312 qpair failed and we were unable to recover it. 00:41:40.312 [2024-10-01 22:40:35.503478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.312 [2024-10-01 22:40:35.503488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.312 qpair failed and we were unable to recover it. 00:41:40.312 [2024-10-01 22:40:35.503790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.312 [2024-10-01 22:40:35.503800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.312 qpair failed and we were unable to recover it. 
00:41:40.312 [2024-10-01 22:40:35.503966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.312 [2024-10-01 22:40:35.503976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.312 qpair failed and we were unable to recover it. 00:41:40.312 [2024-10-01 22:40:35.504325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.312 [2024-10-01 22:40:35.504335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.312 qpair failed and we were unable to recover it. 00:41:40.312 [2024-10-01 22:40:35.504703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.312 [2024-10-01 22:40:35.504713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.312 qpair failed and we were unable to recover it. 00:41:40.312 [2024-10-01 22:40:35.505007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.312 [2024-10-01 22:40:35.505016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.312 qpair failed and we were unable to recover it. 00:41:40.312 [2024-10-01 22:40:35.505392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.312 [2024-10-01 22:40:35.505402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.312 qpair failed and we were unable to recover it. 00:41:40.312 [2024-10-01 22:40:35.505580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.312 [2024-10-01 22:40:35.505589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.312 qpair failed and we were unable to recover it. 00:41:40.312 [2024-10-01 22:40:35.506016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.312 [2024-10-01 22:40:35.506026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.312 qpair failed and we were unable to recover it. 00:41:40.312 [2024-10-01 22:40:35.506313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.312 [2024-10-01 22:40:35.506323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.312 qpair failed and we were unable to recover it. 00:41:40.312 [2024-10-01 22:40:35.506646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.312 [2024-10-01 22:40:35.506656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.312 qpair failed and we were unable to recover it. 00:41:40.312 [2024-10-01 22:40:35.506968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.312 [2024-10-01 22:40:35.506979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.312 qpair failed and we were unable to recover it. 
00:41:40.312 [2024-10-01 22:40:35.507280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.312 [2024-10-01 22:40:35.507290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.312 qpair failed and we were unable to recover it. 00:41:40.312 [2024-10-01 22:40:35.507589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.312 [2024-10-01 22:40:35.507599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.312 qpair failed and we were unable to recover it. 00:41:40.312 [2024-10-01 22:40:35.507879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.312 [2024-10-01 22:40:35.507889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.312 qpair failed and we were unable to recover it. 00:41:40.312 [2024-10-01 22:40:35.508206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.312 [2024-10-01 22:40:35.508215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.312 qpair failed and we were unable to recover it. 00:41:40.312 [2024-10-01 22:40:35.508498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.312 [2024-10-01 22:40:35.508509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.312 qpair failed and we were unable to recover it. 00:41:40.312 [2024-10-01 22:40:35.508787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.312 [2024-10-01 22:40:35.508797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.312 qpair failed and we were unable to recover it. 00:41:40.312 [2024-10-01 22:40:35.509105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.312 [2024-10-01 22:40:35.509115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.312 qpair failed and we were unable to recover it. 00:41:40.312 [2024-10-01 22:40:35.509417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.312 [2024-10-01 22:40:35.509426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.312 qpair failed and we were unable to recover it. 00:41:40.312 [2024-10-01 22:40:35.509797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.312 [2024-10-01 22:40:35.509808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.312 qpair failed and we were unable to recover it. 00:41:40.312 [2024-10-01 22:40:35.510111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.313 [2024-10-01 22:40:35.510120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.313 qpair failed and we were unable to recover it. 
00:41:40.313 [2024-10-01 22:40:35.510430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.313 [2024-10-01 22:40:35.510440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.313 qpair failed and we were unable to recover it. 00:41:40.313 [2024-10-01 22:40:35.510748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.313 [2024-10-01 22:40:35.510758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.313 qpair failed and we were unable to recover it. 00:41:40.313 [2024-10-01 22:40:35.511049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.313 [2024-10-01 22:40:35.511059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.313 qpair failed and we were unable to recover it. 00:41:40.313 [2024-10-01 22:40:35.511371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.313 [2024-10-01 22:40:35.511381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.313 qpair failed and we were unable to recover it. 00:41:40.313 [2024-10-01 22:40:35.511673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.313 [2024-10-01 22:40:35.511683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.313 qpair failed and we were unable to recover it. 00:41:40.313 [2024-10-01 22:40:35.511990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.313 [2024-10-01 22:40:35.512000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.313 qpair failed and we were unable to recover it. 00:41:40.313 [2024-10-01 22:40:35.512180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.313 [2024-10-01 22:40:35.512191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.313 qpair failed and we were unable to recover it. 00:41:40.313 [2024-10-01 22:40:35.512539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.313 [2024-10-01 22:40:35.512557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.313 qpair failed and we were unable to recover it. 00:41:40.313 [2024-10-01 22:40:35.512737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.313 [2024-10-01 22:40:35.512747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.313 qpair failed and we were unable to recover it. 00:41:40.313 [2024-10-01 22:40:35.513045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.313 [2024-10-01 22:40:35.513055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.313 qpair failed and we were unable to recover it. 
00:41:40.313 [2024-10-01 22:40:35.513379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.313 [2024-10-01 22:40:35.513389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.313 qpair failed and we were unable to recover it. 00:41:40.313 [2024-10-01 22:40:35.513569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.313 [2024-10-01 22:40:35.513580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.313 qpair failed and we were unable to recover it. 00:41:40.313 [2024-10-01 22:40:35.513774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.313 [2024-10-01 22:40:35.513785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.313 qpair failed and we were unable to recover it. 00:41:40.313 [2024-10-01 22:40:35.514117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.313 [2024-10-01 22:40:35.514127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.313 qpair failed and we were unable to recover it. 00:41:40.313 [2024-10-01 22:40:35.514429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.313 [2024-10-01 22:40:35.514439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.313 qpair failed and we were unable to recover it. 00:41:40.313 [2024-10-01 22:40:35.514745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.313 [2024-10-01 22:40:35.514756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.313 qpair failed and we were unable to recover it. 00:41:40.313 [2024-10-01 22:40:35.515074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.313 [2024-10-01 22:40:35.515084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.313 qpair failed and we were unable to recover it. 00:41:40.313 [2024-10-01 22:40:35.515389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.313 [2024-10-01 22:40:35.515398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.313 qpair failed and we were unable to recover it. 00:41:40.313 [2024-10-01 22:40:35.515729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.313 [2024-10-01 22:40:35.515740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.313 qpair failed and we were unable to recover it. 00:41:40.313 [2024-10-01 22:40:35.516035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.313 [2024-10-01 22:40:35.516045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.313 qpair failed and we were unable to recover it. 
00:41:40.313 [2024-10-01 22:40:35.516377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.313 [2024-10-01 22:40:35.516387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.313 qpair failed and we were unable to recover it. 00:41:40.313 [2024-10-01 22:40:35.516690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.313 [2024-10-01 22:40:35.516700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.313 qpair failed and we were unable to recover it. 00:41:40.313 [2024-10-01 22:40:35.516989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.313 [2024-10-01 22:40:35.516999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.313 qpair failed and we were unable to recover it. 00:41:40.313 [2024-10-01 22:40:35.517207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.313 [2024-10-01 22:40:35.517217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.313 qpair failed and we were unable to recover it. 00:41:40.313 [2024-10-01 22:40:35.517541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.313 [2024-10-01 22:40:35.517550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.313 qpair failed and we were unable to recover it. 00:41:40.313 [2024-10-01 22:40:35.517882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.313 [2024-10-01 22:40:35.517892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.313 qpair failed and we were unable to recover it. 00:41:40.313 [2024-10-01 22:40:35.518195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.313 [2024-10-01 22:40:35.518205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.313 qpair failed and we were unable to recover it. 00:41:40.313 [2024-10-01 22:40:35.518511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.313 [2024-10-01 22:40:35.518521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.313 qpair failed and we were unable to recover it. 00:41:40.313 [2024-10-01 22:40:35.518832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.313 [2024-10-01 22:40:35.518843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.313 qpair failed and we were unable to recover it. 00:41:40.313 [2024-10-01 22:40:35.519122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.313 [2024-10-01 22:40:35.519132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.313 qpair failed and we were unable to recover it. 
00:41:40.313 [2024-10-01 22:40:35.519434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.313 [2024-10-01 22:40:35.519444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.313 qpair failed and we were unable to recover it. 00:41:40.313 [2024-10-01 22:40:35.519749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.313 [2024-10-01 22:40:35.519759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.313 qpair failed and we were unable to recover it. 00:41:40.313 [2024-10-01 22:40:35.520040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.313 [2024-10-01 22:40:35.520050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.313 qpair failed and we were unable to recover it. 00:41:40.313 [2024-10-01 22:40:35.520372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.313 [2024-10-01 22:40:35.520381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.313 qpair failed and we were unable to recover it. 00:41:40.313 [2024-10-01 22:40:35.520684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.314 [2024-10-01 22:40:35.520696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.314 qpair failed and we were unable to recover it. 00:41:40.314 [2024-10-01 22:40:35.521011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.314 [2024-10-01 22:40:35.521021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.314 qpair failed and we were unable to recover it. 00:41:40.314 [2024-10-01 22:40:35.521389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.314 [2024-10-01 22:40:35.521399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.314 qpair failed and we were unable to recover it. 00:41:40.314 [2024-10-01 22:40:35.521696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.314 [2024-10-01 22:40:35.521707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.314 qpair failed and we were unable to recover it. 00:41:40.314 [2024-10-01 22:40:35.522012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.314 [2024-10-01 22:40:35.522021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.314 qpair failed and we were unable to recover it. 00:41:40.314 [2024-10-01 22:40:35.522323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.314 [2024-10-01 22:40:35.522333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.314 qpair failed and we were unable to recover it. 
00:41:40.314 [2024-10-01 22:40:35.522655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.314 [2024-10-01 22:40:35.522666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.314 qpair failed and we were unable to recover it. 00:41:40.314 [2024-10-01 22:40:35.522951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.314 [2024-10-01 22:40:35.522961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.314 qpair failed and we were unable to recover it. 00:41:40.314 [2024-10-01 22:40:35.523324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.314 [2024-10-01 22:40:35.523334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.314 qpair failed and we were unable to recover it. 00:41:40.314 [2024-10-01 22:40:35.523635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.314 [2024-10-01 22:40:35.523646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.314 qpair failed and we were unable to recover it. 00:41:40.314 [2024-10-01 22:40:35.523955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.314 [2024-10-01 22:40:35.523965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.314 qpair failed and we were unable to recover it. 00:41:40.314 [2024-10-01 22:40:35.524271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.314 [2024-10-01 22:40:35.524280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.314 qpair failed and we were unable to recover it. 00:41:40.314 [2024-10-01 22:40:35.524590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.314 [2024-10-01 22:40:35.524599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.314 qpair failed and we were unable to recover it. 00:41:40.314 [2024-10-01 22:40:35.524931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.314 [2024-10-01 22:40:35.524941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.314 qpair failed and we were unable to recover it. 00:41:40.314 [2024-10-01 22:40:35.525241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.314 [2024-10-01 22:40:35.525251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.314 qpair failed and we were unable to recover it. 00:41:40.314 [2024-10-01 22:40:35.525420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.314 [2024-10-01 22:40:35.525431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.314 qpair failed and we were unable to recover it. 
00:41:40.314 [2024-10-01 22:40:35.525820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.314 [2024-10-01 22:40:35.525830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.314 qpair failed and we were unable to recover it. 00:41:40.314 [2024-10-01 22:40:35.526119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.314 [2024-10-01 22:40:35.526135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.314 qpair failed and we were unable to recover it. 00:41:40.595 [2024-10-01 22:40:35.526415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.595 [2024-10-01 22:40:35.526427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.595 qpair failed and we were unable to recover it. 00:41:40.595 [2024-10-01 22:40:35.526735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.595 [2024-10-01 22:40:35.526745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.595 qpair failed and we were unable to recover it. 00:41:40.595 [2024-10-01 22:40:35.527058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.595 [2024-10-01 22:40:35.527068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.595 qpair failed and we were unable to recover it. 00:41:40.595 [2024-10-01 22:40:35.527371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.595 [2024-10-01 22:40:35.527381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.595 qpair failed and we were unable to recover it. 00:41:40.595 [2024-10-01 22:40:35.527691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.595 [2024-10-01 22:40:35.527701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.595 qpair failed and we were unable to recover it. 00:41:40.595 [2024-10-01 22:40:35.528023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.595 [2024-10-01 22:40:35.528033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.595 qpair failed and we were unable to recover it. 00:41:40.595 [2024-10-01 22:40:35.528340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.595 [2024-10-01 22:40:35.528350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.595 qpair failed and we were unable to recover it. 00:41:40.595 [2024-10-01 22:40:35.528653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.595 [2024-10-01 22:40:35.528663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.595 qpair failed and we were unable to recover it. 
00:41:40.595 [2024-10-01 22:40:35.528832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.595 [2024-10-01 22:40:35.528843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.595 qpair failed and we were unable to recover it. 00:41:40.595 [2024-10-01 22:40:35.529171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.595 [2024-10-01 22:40:35.529184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.595 qpair failed and we were unable to recover it. 00:41:40.595 [2024-10-01 22:40:35.529371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.595 [2024-10-01 22:40:35.529382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.595 qpair failed and we were unable to recover it. 00:41:40.595 [2024-10-01 22:40:35.529685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.595 [2024-10-01 22:40:35.529696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.595 qpair failed and we were unable to recover it. 00:41:40.595 [2024-10-01 22:40:35.529989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.595 [2024-10-01 22:40:35.530000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.595 qpair failed and we were unable to recover it. 00:41:40.595 [2024-10-01 22:40:35.530301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.595 [2024-10-01 22:40:35.530311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.595 qpair failed and we were unable to recover it. 00:41:40.595 [2024-10-01 22:40:35.530589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.595 [2024-10-01 22:40:35.530598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.595 qpair failed and we were unable to recover it. 00:41:40.595 [2024-10-01 22:40:35.530902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.595 [2024-10-01 22:40:35.530913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.595 qpair failed and we were unable to recover it. 00:41:40.595 [2024-10-01 22:40:35.531096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.595 [2024-10-01 22:40:35.531105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.595 qpair failed and we were unable to recover it. 00:41:40.595 [2024-10-01 22:40:35.531323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.595 [2024-10-01 22:40:35.531334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.595 qpair failed and we were unable to recover it. 
00:41:40.595 [2024-10-01 22:40:35.531640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.595 [2024-10-01 22:40:35.531651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.595 qpair failed and we were unable to recover it. 00:41:40.595 [2024-10-01 22:40:35.531959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.595 [2024-10-01 22:40:35.531976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.595 qpair failed and we were unable to recover it. 00:41:40.595 [2024-10-01 22:40:35.532304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.596 [2024-10-01 22:40:35.532314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.596 qpair failed and we were unable to recover it. 00:41:40.596 [2024-10-01 22:40:35.532611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.596 [2024-10-01 22:40:35.532621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.596 qpair failed and we were unable to recover it. 00:41:40.596 [2024-10-01 22:40:35.532931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.596 [2024-10-01 22:40:35.532941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.596 qpair failed and we were unable to recover it. 00:41:40.596 [2024-10-01 22:40:35.533247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.596 [2024-10-01 22:40:35.533257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.596 qpair failed and we were unable to recover it. 00:41:40.596 [2024-10-01 22:40:35.533578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.596 [2024-10-01 22:40:35.533588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.596 qpair failed and we were unable to recover it. 00:41:40.596 [2024-10-01 22:40:35.533870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.596 [2024-10-01 22:40:35.533881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.596 qpair failed and we were unable to recover it. 00:41:40.596 [2024-10-01 22:40:35.534205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.596 [2024-10-01 22:40:35.534215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.596 qpair failed and we were unable to recover it. 00:41:40.596 [2024-10-01 22:40:35.534496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.596 [2024-10-01 22:40:35.534507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.596 qpair failed and we were unable to recover it. 
00:41:40.596 [2024-10-01 22:40:35.534682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.596 [2024-10-01 22:40:35.534693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.596 qpair failed and we were unable to recover it. 00:41:40.596 [2024-10-01 22:40:35.534999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.596 [2024-10-01 22:40:35.535008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.596 qpair failed and we were unable to recover it. 00:41:40.596 [2024-10-01 22:40:35.535313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.596 [2024-10-01 22:40:35.535323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.596 qpair failed and we were unable to recover it. 00:41:40.596 [2024-10-01 22:40:35.535635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.596 [2024-10-01 22:40:35.535646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.596 qpair failed and we were unable to recover it. 00:41:40.596 [2024-10-01 22:40:35.535956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.596 [2024-10-01 22:40:35.535966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.596 qpair failed and we were unable to recover it. 00:41:40.596 [2024-10-01 22:40:35.536284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.596 [2024-10-01 22:40:35.536294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.596 qpair failed and we were unable to recover it. 00:41:40.596 [2024-10-01 22:40:35.536597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.596 [2024-10-01 22:40:35.536607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.596 qpair failed and we were unable to recover it. 00:41:40.596 [2024-10-01 22:40:35.536923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.596 [2024-10-01 22:40:35.536934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.596 qpair failed and we were unable to recover it. 00:41:40.596 [2024-10-01 22:40:35.537217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.596 [2024-10-01 22:40:35.537228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.596 qpair failed and we were unable to recover it. 00:41:40.596 [2024-10-01 22:40:35.537531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.596 [2024-10-01 22:40:35.537542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.596 qpair failed and we were unable to recover it. 
00:41:40.596 [2024-10-01 22:40:35.537778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.596 [2024-10-01 22:40:35.537789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.596 qpair failed and we were unable to recover it.
00:41:40.596 [... the same connect() failed (errno = 111) / sock connection error / qpair failed sequence for tqpair=0x130a180, addr=10.0.0.2, port=4420 repeats continuously from 22:40:35.538108 through 22:40:35.601657 ...]
00:41:40.603 [2024-10-01 22:40:35.601963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.603 [2024-10-01 22:40:35.601973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.603 qpair failed and we were unable to recover it.
00:41:40.603 [2024-10-01 22:40:35.602255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.603 [2024-10-01 22:40:35.602265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.603 qpair failed and we were unable to recover it. 00:41:40.603 [2024-10-01 22:40:35.602589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.603 [2024-10-01 22:40:35.602599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.603 qpair failed and we were unable to recover it. 00:41:40.603 [2024-10-01 22:40:35.602915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.603 [2024-10-01 22:40:35.602926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.603 qpair failed and we were unable to recover it. 00:41:40.603 [2024-10-01 22:40:35.603225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.603 [2024-10-01 22:40:35.603235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.603 qpair failed and we were unable to recover it. 00:41:40.603 [2024-10-01 22:40:35.603495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.603 [2024-10-01 22:40:35.603504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.603 qpair failed and we were unable to recover it. 00:41:40.603 [2024-10-01 22:40:35.603699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.603 [2024-10-01 22:40:35.603710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.603 qpair failed and we were unable to recover it. 00:41:40.603 [2024-10-01 22:40:35.603910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.603 [2024-10-01 22:40:35.603920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.603 qpair failed and we were unable to recover it. 00:41:40.603 [2024-10-01 22:40:35.604116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.603 [2024-10-01 22:40:35.604126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.603 qpair failed and we were unable to recover it. 00:41:40.603 [2024-10-01 22:40:35.604506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.603 [2024-10-01 22:40:35.604516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.603 qpair failed and we were unable to recover it. 00:41:40.603 [2024-10-01 22:40:35.604791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.603 [2024-10-01 22:40:35.604801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.603 qpair failed and we were unable to recover it. 
00:41:40.603 [2024-10-01 22:40:35.605118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.603 [2024-10-01 22:40:35.605127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.603 qpair failed and we were unable to recover it. 00:41:40.603 [2024-10-01 22:40:35.605409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.603 [2024-10-01 22:40:35.605419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.603 qpair failed and we were unable to recover it. 00:41:40.603 [2024-10-01 22:40:35.605756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.603 [2024-10-01 22:40:35.605767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.603 qpair failed and we were unable to recover it. 00:41:40.603 [2024-10-01 22:40:35.606050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.603 [2024-10-01 22:40:35.606059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.603 qpair failed and we were unable to recover it. 00:41:40.603 [2024-10-01 22:40:35.606268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.603 [2024-10-01 22:40:35.606278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.603 qpair failed and we were unable to recover it. 00:41:40.604 [2024-10-01 22:40:35.606594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.604 [2024-10-01 22:40:35.606604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.604 qpair failed and we were unable to recover it. 00:41:40.604 [2024-10-01 22:40:35.606905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.604 [2024-10-01 22:40:35.606916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.604 qpair failed and we were unable to recover it. 00:41:40.604 [2024-10-01 22:40:35.607196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.604 [2024-10-01 22:40:35.607206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.604 qpair failed and we were unable to recover it. 00:41:40.604 [2024-10-01 22:40:35.607543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.604 [2024-10-01 22:40:35.607554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.604 qpair failed and we were unable to recover it. 00:41:40.604 [2024-10-01 22:40:35.607857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.604 [2024-10-01 22:40:35.607868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.604 qpair failed and we were unable to recover it. 
00:41:40.604 [2024-10-01 22:40:35.608073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.604 [2024-10-01 22:40:35.608084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.604 qpair failed and we were unable to recover it. 00:41:40.604 [2024-10-01 22:40:35.608387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.604 [2024-10-01 22:40:35.608397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.604 qpair failed and we were unable to recover it. 00:41:40.604 [2024-10-01 22:40:35.608570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.604 [2024-10-01 22:40:35.608580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.604 qpair failed and we were unable to recover it. 00:41:40.604 [2024-10-01 22:40:35.608945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.604 [2024-10-01 22:40:35.608955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.604 qpair failed and we were unable to recover it. 00:41:40.604 [2024-10-01 22:40:35.609262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.604 [2024-10-01 22:40:35.609272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.604 qpair failed and we were unable to recover it. 00:41:40.604 [2024-10-01 22:40:35.609607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.604 [2024-10-01 22:40:35.609617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.604 qpair failed and we were unable to recover it. 00:41:40.604 [2024-10-01 22:40:35.609914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.604 [2024-10-01 22:40:35.609925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.604 qpair failed and we were unable to recover it. 00:41:40.604 [2024-10-01 22:40:35.610231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.604 [2024-10-01 22:40:35.610241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.604 qpair failed and we were unable to recover it. 00:41:40.604 [2024-10-01 22:40:35.610521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.604 [2024-10-01 22:40:35.610540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.604 qpair failed and we were unable to recover it. 00:41:40.604 [2024-10-01 22:40:35.610855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.604 [2024-10-01 22:40:35.610866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.604 qpair failed and we were unable to recover it. 
00:41:40.604 [2024-10-01 22:40:35.611170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.604 [2024-10-01 22:40:35.611180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.604 qpair failed and we were unable to recover it. 00:41:40.604 [2024-10-01 22:40:35.611487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.604 [2024-10-01 22:40:35.611497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.604 qpair failed and we were unable to recover it. 00:41:40.604 [2024-10-01 22:40:35.611789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.604 [2024-10-01 22:40:35.611799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.604 qpair failed and we were unable to recover it. 00:41:40.604 [2024-10-01 22:40:35.612102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.604 [2024-10-01 22:40:35.612112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.604 qpair failed and we were unable to recover it. 00:41:40.604 [2024-10-01 22:40:35.612423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.604 [2024-10-01 22:40:35.612433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.604 qpair failed and we were unable to recover it. 00:41:40.604 [2024-10-01 22:40:35.612759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.604 [2024-10-01 22:40:35.612770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.604 qpair failed and we were unable to recover it. 00:41:40.604 [2024-10-01 22:40:35.612966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.604 [2024-10-01 22:40:35.612976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.604 qpair failed and we were unable to recover it. 00:41:40.604 [2024-10-01 22:40:35.613302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.604 [2024-10-01 22:40:35.613312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.604 qpair failed and we were unable to recover it. 00:41:40.604 [2024-10-01 22:40:35.613630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.604 [2024-10-01 22:40:35.613640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.604 qpair failed and we were unable to recover it. 00:41:40.604 [2024-10-01 22:40:35.613944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.604 [2024-10-01 22:40:35.613954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.604 qpair failed and we were unable to recover it. 
00:41:40.604 [2024-10-01 22:40:35.614264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.604 [2024-10-01 22:40:35.614274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.604 qpair failed and we were unable to recover it. 00:41:40.604 [2024-10-01 22:40:35.614467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.604 [2024-10-01 22:40:35.614478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.604 qpair failed and we were unable to recover it. 00:41:40.604 [2024-10-01 22:40:35.614775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.604 [2024-10-01 22:40:35.614786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.604 qpair failed and we were unable to recover it. 00:41:40.604 [2024-10-01 22:40:35.615102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.604 [2024-10-01 22:40:35.615112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.604 qpair failed and we were unable to recover it. 00:41:40.604 [2024-10-01 22:40:35.615432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.604 [2024-10-01 22:40:35.615441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.604 qpair failed and we were unable to recover it. 00:41:40.604 [2024-10-01 22:40:35.615726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.604 [2024-10-01 22:40:35.615737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.604 qpair failed and we were unable to recover it. 00:41:40.604 [2024-10-01 22:40:35.615901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.604 [2024-10-01 22:40:35.615916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.605 qpair failed and we were unable to recover it. 00:41:40.605 [2024-10-01 22:40:35.616175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.605 [2024-10-01 22:40:35.616185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.605 qpair failed and we were unable to recover it. 00:41:40.605 [2024-10-01 22:40:35.616495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.605 [2024-10-01 22:40:35.616505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.605 qpair failed and we were unable to recover it. 00:41:40.605 [2024-10-01 22:40:35.616823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.605 [2024-10-01 22:40:35.616834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.605 qpair failed and we were unable to recover it. 
00:41:40.605 [2024-10-01 22:40:35.617113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.605 [2024-10-01 22:40:35.617131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.605 qpair failed and we were unable to recover it. 00:41:40.605 [2024-10-01 22:40:35.617438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.605 [2024-10-01 22:40:35.617448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.605 qpair failed and we were unable to recover it. 00:41:40.605 [2024-10-01 22:40:35.617759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.605 [2024-10-01 22:40:35.617769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.605 qpair failed and we were unable to recover it. 00:41:40.605 [2024-10-01 22:40:35.618059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.605 [2024-10-01 22:40:35.618069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.605 qpair failed and we were unable to recover it. 00:41:40.605 [2024-10-01 22:40:35.618373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.605 [2024-10-01 22:40:35.618383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.605 qpair failed and we were unable to recover it. 00:41:40.605 [2024-10-01 22:40:35.618734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.605 [2024-10-01 22:40:35.618745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.605 qpair failed and we were unable to recover it. 00:41:40.605 [2024-10-01 22:40:35.619062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.605 [2024-10-01 22:40:35.619073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.605 qpair failed and we were unable to recover it. 00:41:40.605 [2024-10-01 22:40:35.619263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.605 [2024-10-01 22:40:35.619273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.605 qpair failed and we were unable to recover it. 00:41:40.605 [2024-10-01 22:40:35.619588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.605 [2024-10-01 22:40:35.619598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.605 qpair failed and we were unable to recover it. 00:41:40.605 [2024-10-01 22:40:35.619909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.605 [2024-10-01 22:40:35.619920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.605 qpair failed and we were unable to recover it. 
00:41:40.605 [2024-10-01 22:40:35.620223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.605 [2024-10-01 22:40:35.620233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.605 qpair failed and we were unable to recover it. 00:41:40.605 [2024-10-01 22:40:35.620554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.605 [2024-10-01 22:40:35.620565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.605 qpair failed and we were unable to recover it. 00:41:40.605 [2024-10-01 22:40:35.620867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.605 [2024-10-01 22:40:35.620877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.605 qpair failed and we were unable to recover it. 00:41:40.605 [2024-10-01 22:40:35.621181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.605 [2024-10-01 22:40:35.621191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.605 qpair failed and we were unable to recover it. 00:41:40.605 [2024-10-01 22:40:35.621496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.605 [2024-10-01 22:40:35.621506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.605 qpair failed and we were unable to recover it. 00:41:40.605 [2024-10-01 22:40:35.621791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.605 [2024-10-01 22:40:35.621801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.605 qpair failed and we were unable to recover it. 00:41:40.605 [2024-10-01 22:40:35.622116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.605 [2024-10-01 22:40:35.622125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.605 qpair failed and we were unable to recover it. 00:41:40.605 [2024-10-01 22:40:35.622398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.605 [2024-10-01 22:40:35.622408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.605 qpair failed and we were unable to recover it. 00:41:40.605 [2024-10-01 22:40:35.622695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.605 [2024-10-01 22:40:35.622705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.605 qpair failed and we were unable to recover it. 00:41:40.605 [2024-10-01 22:40:35.622901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.605 [2024-10-01 22:40:35.622911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.605 qpair failed and we were unable to recover it. 
00:41:40.605 [2024-10-01 22:40:35.623192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.605 [2024-10-01 22:40:35.623202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.605 qpair failed and we were unable to recover it. 00:41:40.605 [2024-10-01 22:40:35.623526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.605 [2024-10-01 22:40:35.623536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.605 qpair failed and we were unable to recover it. 00:41:40.605 [2024-10-01 22:40:35.623866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.605 [2024-10-01 22:40:35.623876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.605 qpair failed and we were unable to recover it. 00:41:40.605 [2024-10-01 22:40:35.624159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.605 [2024-10-01 22:40:35.624172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.605 qpair failed and we were unable to recover it. 00:41:40.605 [2024-10-01 22:40:35.624437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.605 [2024-10-01 22:40:35.624447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.605 qpair failed and we were unable to recover it. 00:41:40.605 [2024-10-01 22:40:35.624744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.605 [2024-10-01 22:40:35.624755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.605 qpair failed and we were unable to recover it. 00:41:40.605 [2024-10-01 22:40:35.625072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.605 [2024-10-01 22:40:35.625082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.605 qpair failed and we were unable to recover it. 00:41:40.606 [2024-10-01 22:40:35.625362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.606 [2024-10-01 22:40:35.625378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.606 qpair failed and we were unable to recover it. 00:41:40.606 [2024-10-01 22:40:35.625677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.606 [2024-10-01 22:40:35.625688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.606 qpair failed and we were unable to recover it. 00:41:40.606 [2024-10-01 22:40:35.625866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.606 [2024-10-01 22:40:35.625877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.606 qpair failed and we were unable to recover it. 
00:41:40.606 [2024-10-01 22:40:35.626158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.606 [2024-10-01 22:40:35.626167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.606 qpair failed and we were unable to recover it. 00:41:40.606 [2024-10-01 22:40:35.626458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.606 [2024-10-01 22:40:35.626468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.606 qpair failed and we were unable to recover it. 00:41:40.606 [2024-10-01 22:40:35.626759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.606 [2024-10-01 22:40:35.626770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.606 qpair failed and we were unable to recover it. 00:41:40.606 [2024-10-01 22:40:35.627093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.606 [2024-10-01 22:40:35.627103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.606 qpair failed and we were unable to recover it. 00:41:40.606 [2024-10-01 22:40:35.627412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.606 [2024-10-01 22:40:35.627422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.606 qpair failed and we were unable to recover it. 00:41:40.606 [2024-10-01 22:40:35.627732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.606 [2024-10-01 22:40:35.627742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.606 qpair failed and we were unable to recover it. 00:41:40.606 [2024-10-01 22:40:35.628049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.606 [2024-10-01 22:40:35.628059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.606 qpair failed and we were unable to recover it. 00:41:40.606 [2024-10-01 22:40:35.628369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.606 [2024-10-01 22:40:35.628379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.606 qpair failed and we were unable to recover it. 00:41:40.606 [2024-10-01 22:40:35.628762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.606 [2024-10-01 22:40:35.628772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.606 qpair failed and we were unable to recover it. 00:41:40.606 [2024-10-01 22:40:35.629069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.606 [2024-10-01 22:40:35.629079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.606 qpair failed and we were unable to recover it. 
00:41:40.606 [2024-10-01 22:40:35.629381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.606 [2024-10-01 22:40:35.629391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.606 qpair failed and we were unable to recover it. 00:41:40.606 [2024-10-01 22:40:35.629696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.606 [2024-10-01 22:40:35.629706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.606 qpair failed and we were unable to recover it. 00:41:40.606 [2024-10-01 22:40:35.630019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.606 [2024-10-01 22:40:35.630029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.606 qpair failed and we were unable to recover it. 00:41:40.606 [2024-10-01 22:40:35.630314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.606 [2024-10-01 22:40:35.630324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.606 qpair failed and we were unable to recover it. 00:41:40.606 [2024-10-01 22:40:35.630688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.606 [2024-10-01 22:40:35.630698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.606 qpair failed and we were unable to recover it. 00:41:40.606 [2024-10-01 22:40:35.630979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.606 [2024-10-01 22:40:35.630996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.606 qpair failed and we were unable to recover it. 00:41:40.606 [2024-10-01 22:40:35.631302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.606 [2024-10-01 22:40:35.631312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.606 qpair failed and we were unable to recover it. 00:41:40.606 [2024-10-01 22:40:35.631593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.606 [2024-10-01 22:40:35.631603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.606 qpair failed and we were unable to recover it. 00:41:40.606 [2024-10-01 22:40:35.631948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.606 [2024-10-01 22:40:35.631959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.606 qpair failed and we were unable to recover it. 00:41:40.606 [2024-10-01 22:40:35.632255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.606 [2024-10-01 22:40:35.632266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.606 qpair failed and we were unable to recover it. 
00:41:40.606 [2024-10-01 22:40:35.632567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.606 [2024-10-01 22:40:35.632577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.606 qpair failed and we were unable to recover it. 00:41:40.606 [2024-10-01 22:40:35.632762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.606 [2024-10-01 22:40:35.632774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.606 qpair failed and we were unable to recover it. 00:41:40.606 [2024-10-01 22:40:35.633103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.606 [2024-10-01 22:40:35.633114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.606 qpair failed and we were unable to recover it. 00:41:40.606 [2024-10-01 22:40:35.633316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.606 [2024-10-01 22:40:35.633326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.606 qpair failed and we were unable to recover it. 00:41:40.606 [2024-10-01 22:40:35.633644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.606 [2024-10-01 22:40:35.633655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.607 qpair failed and we were unable to recover it. 00:41:40.607 [2024-10-01 22:40:35.633961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.607 [2024-10-01 22:40:35.633971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.607 qpair failed and we were unable to recover it. 00:41:40.607 [2024-10-01 22:40:35.634279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.607 [2024-10-01 22:40:35.634288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.607 qpair failed and we were unable to recover it. 00:41:40.607 [2024-10-01 22:40:35.634521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.607 [2024-10-01 22:40:35.634531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.607 qpair failed and we were unable to recover it. 00:41:40.607 [2024-10-01 22:40:35.634749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.607 [2024-10-01 22:40:35.634759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.607 qpair failed and we were unable to recover it. 00:41:40.607 [2024-10-01 22:40:35.634964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.607 [2024-10-01 22:40:35.634974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.607 qpair failed and we were unable to recover it. 
00:41:40.607 [2024-10-01 22:40:35.635312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.607 [2024-10-01 22:40:35.635322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.607 qpair failed and we were unable to recover it. 00:41:40.607 [2024-10-01 22:40:35.635521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.607 [2024-10-01 22:40:35.635531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.607 qpair failed and we were unable to recover it. 00:41:40.607 [2024-10-01 22:40:35.635773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.607 [2024-10-01 22:40:35.635783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.607 qpair failed and we were unable to recover it. 00:41:40.607 [2024-10-01 22:40:35.636080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.607 [2024-10-01 22:40:35.636089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.607 qpair failed and we were unable to recover it. 00:41:40.607 [2024-10-01 22:40:35.636396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.607 [2024-10-01 22:40:35.636407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.607 qpair failed and we were unable to recover it. 00:41:40.607 [2024-10-01 22:40:35.636720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.607 [2024-10-01 22:40:35.636730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.607 qpair failed and we were unable to recover it. 00:41:40.607 [2024-10-01 22:40:35.636945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.607 [2024-10-01 22:40:35.636955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.607 qpair failed and we were unable to recover it. 00:41:40.607 [2024-10-01 22:40:35.637285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.607 [2024-10-01 22:40:35.637294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.607 qpair failed and we were unable to recover it. 00:41:40.607 [2024-10-01 22:40:35.637576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.607 [2024-10-01 22:40:35.637593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.607 qpair failed and we were unable to recover it. 00:41:40.607 [2024-10-01 22:40:35.637916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.607 [2024-10-01 22:40:35.637926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.607 qpair failed and we were unable to recover it. 
00:41:40.607 [2024-10-01 22:40:35.638232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.607 [2024-10-01 22:40:35.638242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.607 qpair failed and we were unable to recover it. 00:41:40.607 [2024-10-01 22:40:35.638549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.607 [2024-10-01 22:40:35.638559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.607 qpair failed and we were unable to recover it. 00:41:40.607 [2024-10-01 22:40:35.638914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.607 [2024-10-01 22:40:35.638925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.607 qpair failed and we were unable to recover it. 00:41:40.607 [2024-10-01 22:40:35.639166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.607 [2024-10-01 22:40:35.639175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.607 qpair failed and we were unable to recover it. 00:41:40.607 [2024-10-01 22:40:35.639515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.607 [2024-10-01 22:40:35.639525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.607 qpair failed and we were unable to recover it. 00:41:40.607 [2024-10-01 22:40:35.639845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.607 [2024-10-01 22:40:35.639856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.607 qpair failed and we were unable to recover it. 00:41:40.607 [2024-10-01 22:40:35.640164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.607 [2024-10-01 22:40:35.640174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.607 qpair failed and we were unable to recover it. 00:41:40.607 [2024-10-01 22:40:35.640481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.607 [2024-10-01 22:40:35.640491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.607 qpair failed and we were unable to recover it. 00:41:40.607 [2024-10-01 22:40:35.640679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.607 [2024-10-01 22:40:35.640689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.607 qpair failed and we were unable to recover it. 00:41:40.607 [2024-10-01 22:40:35.640995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.607 [2024-10-01 22:40:35.641005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.607 qpair failed and we were unable to recover it. 
00:41:40.607 [2024-10-01 22:40:35.641332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:40.607 [2024-10-01 22:40:35.641342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420
00:41:40.607 qpair failed and we were unable to recover it.
00:41:40.607 [... the same three-line error sequence repeats for every reconnect attempt between 22:40:35.641 and 22:40:35.700: connect() to 10.0.0.2:4420 keeps returning errno = 111, and tqpair 0x130a180 is never recovered ...]
00:41:40.614 [2024-10-01 22:40:35.700481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:40.614 [2024-10-01 22:40:35.700491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420
00:41:40.614 qpair failed and we were unable to recover it.
00:41:40.614 [2024-10-01 22:40:35.700671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.614 [2024-10-01 22:40:35.700681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.614 qpair failed and we were unable to recover it. 00:41:40.614 [2024-10-01 22:40:35.700979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.614 [2024-10-01 22:40:35.700988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.614 qpair failed and we were unable to recover it. 00:41:40.614 [2024-10-01 22:40:35.701181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.614 [2024-10-01 22:40:35.701191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.614 qpair failed and we were unable to recover it. 00:41:40.614 [2024-10-01 22:40:35.701490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.614 [2024-10-01 22:40:35.701500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.614 qpair failed and we were unable to recover it. 00:41:40.614 [2024-10-01 22:40:35.701730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.614 [2024-10-01 22:40:35.701741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.614 qpair failed and we were unable to recover it. 00:41:40.614 [2024-10-01 22:40:35.702068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.614 [2024-10-01 22:40:35.702078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.614 qpair failed and we were unable to recover it. 00:41:40.614 [2024-10-01 22:40:35.702355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.614 [2024-10-01 22:40:35.702365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.614 qpair failed and we were unable to recover it. 00:41:40.614 [2024-10-01 22:40:35.702651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.614 [2024-10-01 22:40:35.702661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.614 qpair failed and we were unable to recover it. 00:41:40.614 [2024-10-01 22:40:35.702962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.614 [2024-10-01 22:40:35.702972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.614 qpair failed and we were unable to recover it. 00:41:40.614 [2024-10-01 22:40:35.703170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.614 [2024-10-01 22:40:35.703180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.614 qpair failed and we were unable to recover it. 
00:41:40.614 [2024-10-01 22:40:35.703501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.614 [2024-10-01 22:40:35.703510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.614 qpair failed and we were unable to recover it. 00:41:40.614 [2024-10-01 22:40:35.703697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.614 [2024-10-01 22:40:35.703710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.614 qpair failed and we were unable to recover it. 00:41:40.614 [2024-10-01 22:40:35.703953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.614 [2024-10-01 22:40:35.703963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.614 qpair failed and we were unable to recover it. 00:41:40.614 [2024-10-01 22:40:35.704123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.614 [2024-10-01 22:40:35.704133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.614 qpair failed and we were unable to recover it. 00:41:40.614 [2024-10-01 22:40:35.704457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.614 [2024-10-01 22:40:35.704468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.614 qpair failed and we were unable to recover it. 00:41:40.614 [2024-10-01 22:40:35.704659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.614 [2024-10-01 22:40:35.704670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.614 qpair failed and we were unable to recover it. 00:41:40.614 [2024-10-01 22:40:35.704966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.614 [2024-10-01 22:40:35.704976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.614 qpair failed and we were unable to recover it. 00:41:40.614 [2024-10-01 22:40:35.705284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.614 [2024-10-01 22:40:35.705294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.614 qpair failed and we were unable to recover it. 00:41:40.614 [2024-10-01 22:40:35.705608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.614 [2024-10-01 22:40:35.705621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.614 qpair failed and we were unable to recover it. 00:41:40.614 [2024-10-01 22:40:35.705915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.614 [2024-10-01 22:40:35.705925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.614 qpair failed and we were unable to recover it. 
00:41:40.614 [2024-10-01 22:40:35.706232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.614 [2024-10-01 22:40:35.706242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.614 qpair failed and we were unable to recover it. 00:41:40.614 [2024-10-01 22:40:35.706544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.614 [2024-10-01 22:40:35.706554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.614 qpair failed and we were unable to recover it. 00:41:40.615 [2024-10-01 22:40:35.706742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.615 [2024-10-01 22:40:35.706754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.615 qpair failed and we were unable to recover it. 00:41:40.615 [2024-10-01 22:40:35.707029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.615 [2024-10-01 22:40:35.707039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.615 qpair failed and we were unable to recover it. 00:41:40.615 [2024-10-01 22:40:35.707341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.615 [2024-10-01 22:40:35.707350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.615 qpair failed and we were unable to recover it. 00:41:40.615 [2024-10-01 22:40:35.707663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.615 [2024-10-01 22:40:35.707673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.615 qpair failed and we were unable to recover it. 00:41:40.615 [2024-10-01 22:40:35.707989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.615 [2024-10-01 22:40:35.707998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.615 qpair failed and we were unable to recover it. 00:41:40.615 [2024-10-01 22:40:35.708364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.615 [2024-10-01 22:40:35.708373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.615 qpair failed and we were unable to recover it. 00:41:40.615 [2024-10-01 22:40:35.708688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.615 [2024-10-01 22:40:35.708698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.615 qpair failed and we were unable to recover it. 00:41:40.615 [2024-10-01 22:40:35.709024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.615 [2024-10-01 22:40:35.709033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.615 qpair failed and we were unable to recover it. 
00:41:40.615 [2024-10-01 22:40:35.709236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.615 [2024-10-01 22:40:35.709246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.615 qpair failed and we were unable to recover it. 00:41:40.615 [2024-10-01 22:40:35.709555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.615 [2024-10-01 22:40:35.709566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.615 qpair failed and we were unable to recover it. 00:41:40.615 [2024-10-01 22:40:35.709948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.615 [2024-10-01 22:40:35.709960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.615 qpair failed and we were unable to recover it. 00:41:40.615 [2024-10-01 22:40:35.710269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.615 [2024-10-01 22:40:35.710279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.615 qpair failed and we were unable to recover it. 00:41:40.615 [2024-10-01 22:40:35.710586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.615 [2024-10-01 22:40:35.710597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.615 qpair failed and we were unable to recover it. 00:41:40.615 [2024-10-01 22:40:35.710795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.615 [2024-10-01 22:40:35.710805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.615 qpair failed and we were unable to recover it. 00:41:40.615 [2024-10-01 22:40:35.711105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.615 [2024-10-01 22:40:35.711115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.615 qpair failed and we were unable to recover it. 00:41:40.615 [2024-10-01 22:40:35.711420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.615 [2024-10-01 22:40:35.711429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.615 qpair failed and we were unable to recover it. 00:41:40.615 [2024-10-01 22:40:35.711714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.615 [2024-10-01 22:40:35.711724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.615 qpair failed and we were unable to recover it. 00:41:40.615 [2024-10-01 22:40:35.712039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.615 [2024-10-01 22:40:35.712048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.615 qpair failed and we were unable to recover it. 
00:41:40.615 [2024-10-01 22:40:35.712357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.615 [2024-10-01 22:40:35.712367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.615 qpair failed and we were unable to recover it. 00:41:40.615 [2024-10-01 22:40:35.712673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.615 [2024-10-01 22:40:35.712683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.615 qpair failed and we were unable to recover it. 00:41:40.615 [2024-10-01 22:40:35.713008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.615 [2024-10-01 22:40:35.713017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.615 qpair failed and we were unable to recover it. 00:41:40.615 [2024-10-01 22:40:35.713309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.615 [2024-10-01 22:40:35.713319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.615 qpair failed and we were unable to recover it. 00:41:40.615 [2024-10-01 22:40:35.713630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.615 [2024-10-01 22:40:35.713640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.615 qpair failed and we were unable to recover it. 00:41:40.615 [2024-10-01 22:40:35.713926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.615 [2024-10-01 22:40:35.713938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.615 qpair failed and we were unable to recover it. 00:41:40.615 [2024-10-01 22:40:35.714246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.615 [2024-10-01 22:40:35.714255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.615 qpair failed and we were unable to recover it. 00:41:40.615 [2024-10-01 22:40:35.714593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.615 [2024-10-01 22:40:35.714603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.615 qpair failed and we were unable to recover it. 00:41:40.615 [2024-10-01 22:40:35.714847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.615 [2024-10-01 22:40:35.714858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.615 qpair failed and we were unable to recover it. 00:41:40.615 [2024-10-01 22:40:35.715162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.615 [2024-10-01 22:40:35.715172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.615 qpair failed and we were unable to recover it. 
00:41:40.615 [2024-10-01 22:40:35.715452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.615 [2024-10-01 22:40:35.715462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.615 qpair failed and we were unable to recover it. 00:41:40.615 [2024-10-01 22:40:35.715774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.615 [2024-10-01 22:40:35.715784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.615 qpair failed and we were unable to recover it. 00:41:40.615 [2024-10-01 22:40:35.716107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.615 [2024-10-01 22:40:35.716117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.615 qpair failed and we were unable to recover it. 00:41:40.615 [2024-10-01 22:40:35.716394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.615 [2024-10-01 22:40:35.716403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.615 qpair failed and we were unable to recover it. 00:41:40.615 [2024-10-01 22:40:35.716687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.615 [2024-10-01 22:40:35.716698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.615 qpair failed and we were unable to recover it. 00:41:40.615 [2024-10-01 22:40:35.717016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.615 [2024-10-01 22:40:35.717026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.615 qpair failed and we were unable to recover it. 00:41:40.615 [2024-10-01 22:40:35.717321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.615 [2024-10-01 22:40:35.717330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.615 qpair failed and we were unable to recover it. 00:41:40.615 [2024-10-01 22:40:35.717617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.615 [2024-10-01 22:40:35.717633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.615 qpair failed and we were unable to recover it. 00:41:40.615 [2024-10-01 22:40:35.717841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.615 [2024-10-01 22:40:35.717851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.615 qpair failed and we were unable to recover it. 00:41:40.615 [2024-10-01 22:40:35.718160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.615 [2024-10-01 22:40:35.718170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.615 qpair failed and we were unable to recover it. 
00:41:40.615 [2024-10-01 22:40:35.718495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.615 [2024-10-01 22:40:35.718505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.615 qpair failed and we were unable to recover it. 00:41:40.615 [2024-10-01 22:40:35.718790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.615 [2024-10-01 22:40:35.718802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.615 qpair failed and we were unable to recover it. 00:41:40.615 [2024-10-01 22:40:35.719102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.615 [2024-10-01 22:40:35.719111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.615 qpair failed and we were unable to recover it. 00:41:40.615 [2024-10-01 22:40:35.719393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.615 [2024-10-01 22:40:35.719402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.615 qpair failed and we were unable to recover it. 00:41:40.615 [2024-10-01 22:40:35.719693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.615 [2024-10-01 22:40:35.719704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.615 qpair failed and we were unable to recover it. 00:41:40.615 [2024-10-01 22:40:35.720074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.615 [2024-10-01 22:40:35.720085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.616 qpair failed and we were unable to recover it. 00:41:40.616 [2024-10-01 22:40:35.720254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.616 [2024-10-01 22:40:35.720265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.616 qpair failed and we were unable to recover it. 00:41:40.616 [2024-10-01 22:40:35.720557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.616 [2024-10-01 22:40:35.720567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.616 qpair failed and we were unable to recover it. 00:41:40.616 [2024-10-01 22:40:35.720891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.616 [2024-10-01 22:40:35.720902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.616 qpair failed and we were unable to recover it. 00:41:40.616 [2024-10-01 22:40:35.721252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.616 [2024-10-01 22:40:35.721262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.616 qpair failed and we were unable to recover it. 
00:41:40.616 [2024-10-01 22:40:35.721571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.616 [2024-10-01 22:40:35.721581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.616 qpair failed and we were unable to recover it. 00:41:40.616 [2024-10-01 22:40:35.721738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.616 [2024-10-01 22:40:35.721750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.616 qpair failed and we were unable to recover it. 00:41:40.616 [2024-10-01 22:40:35.722047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.616 [2024-10-01 22:40:35.722061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.616 qpair failed and we were unable to recover it. 00:41:40.616 [2024-10-01 22:40:35.722362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.616 [2024-10-01 22:40:35.722371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.616 qpair failed and we were unable to recover it. 00:41:40.616 [2024-10-01 22:40:35.722692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.616 [2024-10-01 22:40:35.722702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.616 qpair failed and we were unable to recover it. 00:41:40.616 [2024-10-01 22:40:35.722986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.616 [2024-10-01 22:40:35.722996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.616 qpair failed and we were unable to recover it. 00:41:40.616 [2024-10-01 22:40:35.723302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.616 [2024-10-01 22:40:35.723312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.616 qpair failed and we were unable to recover it. 00:41:40.616 [2024-10-01 22:40:35.723649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.616 [2024-10-01 22:40:35.723660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.616 qpair failed and we were unable to recover it. 00:41:40.616 [2024-10-01 22:40:35.723943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.616 [2024-10-01 22:40:35.723953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.616 qpair failed and we were unable to recover it. 00:41:40.616 [2024-10-01 22:40:35.724331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.616 [2024-10-01 22:40:35.724340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.616 qpair failed and we were unable to recover it. 
00:41:40.616 [2024-10-01 22:40:35.724613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.616 [2024-10-01 22:40:35.724623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.616 qpair failed and we were unable to recover it. 00:41:40.616 [2024-10-01 22:40:35.725015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.616 [2024-10-01 22:40:35.725024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.616 qpair failed and we were unable to recover it. 00:41:40.616 [2024-10-01 22:40:35.725305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.616 [2024-10-01 22:40:35.725315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.616 qpair failed and we were unable to recover it. 00:41:40.616 [2024-10-01 22:40:35.725651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.616 [2024-10-01 22:40:35.725661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.616 qpair failed and we were unable to recover it. 00:41:40.616 [2024-10-01 22:40:35.725995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.616 [2024-10-01 22:40:35.726006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.616 qpair failed and we were unable to recover it. 00:41:40.616 [2024-10-01 22:40:35.726181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.616 [2024-10-01 22:40:35.726191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.616 qpair failed and we were unable to recover it. 00:41:40.616 [2024-10-01 22:40:35.726483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.616 [2024-10-01 22:40:35.726493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.616 qpair failed and we were unable to recover it. 00:41:40.616 [2024-10-01 22:40:35.726779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.616 [2024-10-01 22:40:35.726790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.616 qpair failed and we were unable to recover it. 00:41:40.616 [2024-10-01 22:40:35.727033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.616 [2024-10-01 22:40:35.727042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.616 qpair failed and we were unable to recover it. 00:41:40.616 [2024-10-01 22:40:35.727371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.616 [2024-10-01 22:40:35.727380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.616 qpair failed and we were unable to recover it. 
00:41:40.616 [2024-10-01 22:40:35.727677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.616 [2024-10-01 22:40:35.727688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.616 qpair failed and we were unable to recover it. 00:41:40.616 [2024-10-01 22:40:35.728021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.616 [2024-10-01 22:40:35.728031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.616 qpair failed and we were unable to recover it. 00:41:40.616 [2024-10-01 22:40:35.728335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.616 [2024-10-01 22:40:35.728345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.616 qpair failed and we were unable to recover it. 00:41:40.616 [2024-10-01 22:40:35.728622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.616 [2024-10-01 22:40:35.728636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.616 qpair failed and we were unable to recover it. 00:41:40.616 [2024-10-01 22:40:35.728933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.616 [2024-10-01 22:40:35.728943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.616 qpair failed and we were unable to recover it. 00:41:40.616 [2024-10-01 22:40:35.729226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.616 [2024-10-01 22:40:35.729236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.616 qpair failed and we were unable to recover it. 00:41:40.616 [2024-10-01 22:40:35.729423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.616 [2024-10-01 22:40:35.729434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.616 qpair failed and we were unable to recover it. 00:41:40.616 [2024-10-01 22:40:35.729744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.616 [2024-10-01 22:40:35.729754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.616 qpair failed and we were unable to recover it. 00:41:40.616 [2024-10-01 22:40:35.730062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.616 [2024-10-01 22:40:35.730072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.616 qpair failed and we were unable to recover it. 00:41:40.616 [2024-10-01 22:40:35.730378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.616 [2024-10-01 22:40:35.730388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.616 qpair failed and we were unable to recover it. 
00:41:40.616 [2024-10-01 22:40:35.730686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.616 [2024-10-01 22:40:35.730696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.616 qpair failed and we were unable to recover it. 00:41:40.616 [2024-10-01 22:40:35.730876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.616 [2024-10-01 22:40:35.730886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.616 qpair failed and we were unable to recover it. 00:41:40.616 [2024-10-01 22:40:35.731224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.616 [2024-10-01 22:40:35.731234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.616 qpair failed and we were unable to recover it. 00:41:40.616 [2024-10-01 22:40:35.731514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.616 [2024-10-01 22:40:35.731524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.616 qpair failed and we were unable to recover it. 00:41:40.616 [2024-10-01 22:40:35.731728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.616 [2024-10-01 22:40:35.731738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.616 qpair failed and we were unable to recover it. 00:41:40.617 [2024-10-01 22:40:35.732015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.617 [2024-10-01 22:40:35.732025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.617 qpair failed and we were unable to recover it. 00:41:40.617 [2024-10-01 22:40:35.732336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.617 [2024-10-01 22:40:35.732346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.617 qpair failed and we were unable to recover it. 00:41:40.617 [2024-10-01 22:40:35.732639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.617 [2024-10-01 22:40:35.732650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.617 qpair failed and we were unable to recover it. 00:41:40.617 [2024-10-01 22:40:35.732966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.617 [2024-10-01 22:40:35.732976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.617 qpair failed and we were unable to recover it. 00:41:40.617 [2024-10-01 22:40:35.733278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.617 [2024-10-01 22:40:35.733288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.617 qpair failed and we were unable to recover it. 
00:41:40.617 [2024-10-01 22:40:35.733453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.617 [2024-10-01 22:40:35.733463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.617 qpair failed and we were unable to recover it. 00:41:40.617 [2024-10-01 22:40:35.733753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.617 [2024-10-01 22:40:35.733763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.617 qpair failed and we were unable to recover it. 00:41:40.617 [2024-10-01 22:40:35.734076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.617 [2024-10-01 22:40:35.734086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.617 qpair failed and we were unable to recover it. 00:41:40.617 [2024-10-01 22:40:35.734366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.617 [2024-10-01 22:40:35.734383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.617 qpair failed and we were unable to recover it. 00:41:40.617 [2024-10-01 22:40:35.734724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.617 [2024-10-01 22:40:35.734735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.617 qpair failed and we were unable to recover it. 00:41:40.617 [2024-10-01 22:40:35.735094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.617 [2024-10-01 22:40:35.735105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.617 qpair failed and we were unable to recover it. 00:41:40.617 [2024-10-01 22:40:35.735404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.617 [2024-10-01 22:40:35.735415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.617 qpair failed and we were unable to recover it. 00:41:40.617 [2024-10-01 22:40:35.735739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.617 [2024-10-01 22:40:35.735749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.617 qpair failed and we were unable to recover it. 00:41:40.617 [2024-10-01 22:40:35.736050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.617 [2024-10-01 22:40:35.736060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.617 qpair failed and we were unable to recover it. 00:41:40.617 [2024-10-01 22:40:35.736365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.617 [2024-10-01 22:40:35.736375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.617 qpair failed and we were unable to recover it. 
00:41:40.617 [2024-10-01 22:40:35.736677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.617 [2024-10-01 22:40:35.736687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.617 qpair failed and we were unable to recover it. 00:41:40.617 [2024-10-01 22:40:35.736997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.617 [2024-10-01 22:40:35.737006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.617 qpair failed and we were unable to recover it. 00:41:40.617 [2024-10-01 22:40:35.737285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.617 [2024-10-01 22:40:35.737295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.617 qpair failed and we were unable to recover it. 00:41:40.617 [2024-10-01 22:40:35.737579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.617 [2024-10-01 22:40:35.737589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.617 qpair failed and we were unable to recover it. 00:41:40.617 [2024-10-01 22:40:35.737891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.617 [2024-10-01 22:40:35.737901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.617 qpair failed and we were unable to recover it. 00:41:40.617 [2024-10-01 22:40:35.738197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.617 [2024-10-01 22:40:35.738207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.617 qpair failed and we were unable to recover it. 00:41:40.617 [2024-10-01 22:40:35.738486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.617 [2024-10-01 22:40:35.738496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.617 qpair failed and we were unable to recover it. 00:41:40.617 [2024-10-01 22:40:35.738774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.617 [2024-10-01 22:40:35.738785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.617 qpair failed and we were unable to recover it. 00:41:40.617 [2024-10-01 22:40:35.739053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.617 [2024-10-01 22:40:35.739063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.617 qpair failed and we were unable to recover it. 00:41:40.617 [2024-10-01 22:40:35.739384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.617 [2024-10-01 22:40:35.739394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.617 qpair failed and we were unable to recover it. 
00:41:40.617 [2024-10-01 22:40:35.739687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.617 [2024-10-01 22:40:35.739698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.617 qpair failed and we were unable to recover it. 
00:41:40.622 [last message group repeated, with advancing timestamps, for every subsequent reconnect attempt from 2024-10-01 22:40:35.739885 through 2024-10-01 22:40:35.802794: each connect() to tqpair=0x130a180 (addr=10.0.0.2, port=4420) failed with errno = 111 and the qpair could not be recovered]
00:41:40.622 [2024-10-01 22:40:35.803142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.622 [2024-10-01 22:40:35.803151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.622 qpair failed and we were unable to recover it. 00:41:40.622 [2024-10-01 22:40:35.803454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.622 [2024-10-01 22:40:35.803464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.623 qpair failed and we were unable to recover it. 00:41:40.623 [2024-10-01 22:40:35.803789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.623 [2024-10-01 22:40:35.803799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.623 qpair failed and we were unable to recover it. 00:41:40.623 [2024-10-01 22:40:35.804102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.623 [2024-10-01 22:40:35.804112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.623 qpair failed and we were unable to recover it. 00:41:40.623 [2024-10-01 22:40:35.804414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.623 [2024-10-01 22:40:35.804424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.623 qpair failed and we were unable to recover it. 00:41:40.623 [2024-10-01 22:40:35.804738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.623 [2024-10-01 22:40:35.804748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.623 qpair failed and we were unable to recover it. 00:41:40.623 [2024-10-01 22:40:35.805063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.623 [2024-10-01 22:40:35.805072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.623 qpair failed and we were unable to recover it. 00:41:40.623 [2024-10-01 22:40:35.805356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.623 [2024-10-01 22:40:35.805365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.623 qpair failed and we were unable to recover it. 00:41:40.623 [2024-10-01 22:40:35.805673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.623 [2024-10-01 22:40:35.805683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.623 qpair failed and we were unable to recover it. 00:41:40.623 [2024-10-01 22:40:35.805992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.623 [2024-10-01 22:40:35.806002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.623 qpair failed and we were unable to recover it. 
00:41:40.623 [2024-10-01 22:40:35.806303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.623 [2024-10-01 22:40:35.806313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.623 qpair failed and we were unable to recover it. 00:41:40.623 [2024-10-01 22:40:35.806599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.623 [2024-10-01 22:40:35.806609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.623 qpair failed and we were unable to recover it. 00:41:40.623 [2024-10-01 22:40:35.806908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.623 [2024-10-01 22:40:35.806918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.623 qpair failed and we were unable to recover it. 00:41:40.623 [2024-10-01 22:40:35.807186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.623 [2024-10-01 22:40:35.807196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.623 qpair failed and we were unable to recover it. 00:41:40.623 [2024-10-01 22:40:35.807522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.623 [2024-10-01 22:40:35.807533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.623 qpair failed and we were unable to recover it. 00:41:40.623 [2024-10-01 22:40:35.807862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.623 [2024-10-01 22:40:35.807873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.623 qpair failed and we were unable to recover it. 00:41:40.623 [2024-10-01 22:40:35.808176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.623 [2024-10-01 22:40:35.808187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.623 qpair failed and we were unable to recover it. 00:41:40.623 [2024-10-01 22:40:35.808391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.623 [2024-10-01 22:40:35.808404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.623 qpair failed and we were unable to recover it. 00:41:40.623 [2024-10-01 22:40:35.808706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.623 [2024-10-01 22:40:35.808716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.623 qpair failed and we were unable to recover it. 00:41:40.623 [2024-10-01 22:40:35.809041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.623 [2024-10-01 22:40:35.809051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.623 qpair failed and we were unable to recover it. 
00:41:40.623 [2024-10-01 22:40:35.809352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.623 [2024-10-01 22:40:35.809362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.623 qpair failed and we were unable to recover it. 00:41:40.623 [2024-10-01 22:40:35.809672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.623 [2024-10-01 22:40:35.809683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.623 qpair failed and we were unable to recover it. 00:41:40.623 [2024-10-01 22:40:35.809988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.623 [2024-10-01 22:40:35.809998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.623 qpair failed and we were unable to recover it. 00:41:40.623 [2024-10-01 22:40:35.810280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.623 [2024-10-01 22:40:35.810290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.623 qpair failed and we were unable to recover it. 00:41:40.623 [2024-10-01 22:40:35.810562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.623 [2024-10-01 22:40:35.810572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.623 qpair failed and we were unable to recover it. 00:41:40.623 [2024-10-01 22:40:35.810885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.623 [2024-10-01 22:40:35.810896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.623 qpair failed and we were unable to recover it. 00:41:40.623 [2024-10-01 22:40:35.811195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.623 [2024-10-01 22:40:35.811205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.623 qpair failed and we were unable to recover it. 00:41:40.623 [2024-10-01 22:40:35.811576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.623 [2024-10-01 22:40:35.811587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.623 qpair failed and we were unable to recover it. 00:41:40.623 [2024-10-01 22:40:35.811882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.623 [2024-10-01 22:40:35.811892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.623 qpair failed and we were unable to recover it. 00:41:40.623 [2024-10-01 22:40:35.812194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.623 [2024-10-01 22:40:35.812205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.623 qpair failed and we were unable to recover it. 
00:41:40.623 [2024-10-01 22:40:35.812449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.623 [2024-10-01 22:40:35.812459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.623 qpair failed and we were unable to recover it. 00:41:40.623 [2024-10-01 22:40:35.812765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.623 [2024-10-01 22:40:35.812775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.623 qpair failed and we were unable to recover it. 00:41:40.623 [2024-10-01 22:40:35.813078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.623 [2024-10-01 22:40:35.813087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.623 qpair failed and we were unable to recover it. 00:41:40.623 [2024-10-01 22:40:35.813399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.623 [2024-10-01 22:40:35.813408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.623 qpair failed and we were unable to recover it. 00:41:40.623 [2024-10-01 22:40:35.813741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.623 [2024-10-01 22:40:35.813751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.623 qpair failed and we were unable to recover it. 00:41:40.623 [2024-10-01 22:40:35.814063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.623 [2024-10-01 22:40:35.814073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.623 qpair failed and we were unable to recover it. 00:41:40.623 [2024-10-01 22:40:35.814395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.623 [2024-10-01 22:40:35.814406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.623 qpair failed and we were unable to recover it. 00:41:40.623 [2024-10-01 22:40:35.814708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.623 [2024-10-01 22:40:35.814718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.623 qpair failed and we were unable to recover it. 00:41:40.623 [2024-10-01 22:40:35.814925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.623 [2024-10-01 22:40:35.814934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.623 qpair failed and we were unable to recover it. 00:41:40.623 [2024-10-01 22:40:35.815146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.623 [2024-10-01 22:40:35.815155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.623 qpair failed and we were unable to recover it. 
00:41:40.623 [2024-10-01 22:40:35.815464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.623 [2024-10-01 22:40:35.815474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.623 qpair failed and we were unable to recover it. 00:41:40.623 [2024-10-01 22:40:35.815779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.623 [2024-10-01 22:40:35.815789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.623 qpair failed and we were unable to recover it. 00:41:40.623 [2024-10-01 22:40:35.815970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.623 [2024-10-01 22:40:35.815981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.623 qpair failed and we were unable to recover it. 00:41:40.623 [2024-10-01 22:40:35.816290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.623 [2024-10-01 22:40:35.816300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.623 qpair failed and we were unable to recover it. 00:41:40.623 [2024-10-01 22:40:35.816508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.623 [2024-10-01 22:40:35.816521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.623 qpair failed and we were unable to recover it. 00:41:40.623 [2024-10-01 22:40:35.816823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.623 [2024-10-01 22:40:35.816834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.623 qpair failed and we were unable to recover it. 00:41:40.623 [2024-10-01 22:40:35.817111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.623 [2024-10-01 22:40:35.817121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.623 qpair failed and we were unable to recover it. 00:41:40.623 [2024-10-01 22:40:35.817441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.623 [2024-10-01 22:40:35.817450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.623 qpair failed and we were unable to recover it. 00:41:40.623 [2024-10-01 22:40:35.817644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.623 [2024-10-01 22:40:35.817654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.623 qpair failed and we were unable to recover it. 00:41:40.623 [2024-10-01 22:40:35.817993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.624 [2024-10-01 22:40:35.818002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.624 qpair failed and we were unable to recover it. 
00:41:40.624 [2024-10-01 22:40:35.818318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.624 [2024-10-01 22:40:35.818327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.624 qpair failed and we were unable to recover it. 00:41:40.624 [2024-10-01 22:40:35.818545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.624 [2024-10-01 22:40:35.818555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.624 qpair failed and we were unable to recover it. 00:41:40.624 [2024-10-01 22:40:35.818883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.624 [2024-10-01 22:40:35.818894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.624 qpair failed and we were unable to recover it. 00:41:40.624 [2024-10-01 22:40:35.819213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.624 [2024-10-01 22:40:35.819222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.624 qpair failed and we were unable to recover it. 00:41:40.624 [2024-10-01 22:40:35.819420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.624 [2024-10-01 22:40:35.819430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.624 qpair failed and we were unable to recover it. 00:41:40.624 [2024-10-01 22:40:35.819794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.624 [2024-10-01 22:40:35.819804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.624 qpair failed and we were unable to recover it. 00:41:40.624 [2024-10-01 22:40:35.820066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.624 [2024-10-01 22:40:35.820075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.624 qpair failed and we were unable to recover it. 00:41:40.624 [2024-10-01 22:40:35.820360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.624 [2024-10-01 22:40:35.820370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.624 qpair failed and we were unable to recover it. 00:41:40.624 [2024-10-01 22:40:35.820686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.624 [2024-10-01 22:40:35.820697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.624 qpair failed and we were unable to recover it. 00:41:40.624 [2024-10-01 22:40:35.820872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.624 [2024-10-01 22:40:35.820882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.624 qpair failed and we were unable to recover it. 
00:41:40.624 [2024-10-01 22:40:35.821189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.624 [2024-10-01 22:40:35.821199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.624 qpair failed and we were unable to recover it. 00:41:40.624 [2024-10-01 22:40:35.821519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.624 [2024-10-01 22:40:35.821528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.624 qpair failed and we were unable to recover it. 00:41:40.624 [2024-10-01 22:40:35.821825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.624 [2024-10-01 22:40:35.821835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.624 qpair failed and we were unable to recover it. 00:41:40.624 [2024-10-01 22:40:35.822126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.624 [2024-10-01 22:40:35.822135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.624 qpair failed and we were unable to recover it. 00:41:40.624 [2024-10-01 22:40:35.822444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.624 [2024-10-01 22:40:35.822454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.624 qpair failed and we were unable to recover it. 00:41:40.624 [2024-10-01 22:40:35.822761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.624 [2024-10-01 22:40:35.822771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.624 qpair failed and we were unable to recover it. 00:41:40.624 [2024-10-01 22:40:35.823034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.624 [2024-10-01 22:40:35.823044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.624 qpair failed and we were unable to recover it. 00:41:40.624 [2024-10-01 22:40:35.823401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.624 [2024-10-01 22:40:35.823411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.624 qpair failed and we were unable to recover it. 00:41:40.624 [2024-10-01 22:40:35.823692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.624 [2024-10-01 22:40:35.823702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.624 qpair failed and we were unable to recover it. 00:41:40.624 [2024-10-01 22:40:35.824023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.624 [2024-10-01 22:40:35.824033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.624 qpair failed and we were unable to recover it. 
00:41:40.624 [2024-10-01 22:40:35.824335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.624 [2024-10-01 22:40:35.824345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.624 qpair failed and we were unable to recover it. 00:41:40.624 [2024-10-01 22:40:35.824632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.624 [2024-10-01 22:40:35.824642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.624 qpair failed and we were unable to recover it. 00:41:40.624 [2024-10-01 22:40:35.824947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.624 [2024-10-01 22:40:35.824957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.624 qpair failed and we were unable to recover it. 00:41:40.624 [2024-10-01 22:40:35.825276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.624 [2024-10-01 22:40:35.825285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.624 qpair failed and we were unable to recover it. 00:41:40.624 [2024-10-01 22:40:35.825599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.624 [2024-10-01 22:40:35.825609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.624 qpair failed and we were unable to recover it. 00:41:40.624 [2024-10-01 22:40:35.825925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.624 [2024-10-01 22:40:35.825936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.624 qpair failed and we were unable to recover it. 00:41:40.624 [2024-10-01 22:40:35.826243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.624 [2024-10-01 22:40:35.826254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.624 qpair failed and we were unable to recover it. 00:41:40.624 [2024-10-01 22:40:35.826559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.624 [2024-10-01 22:40:35.826570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.624 qpair failed and we were unable to recover it. 00:41:40.624 [2024-10-01 22:40:35.826876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.624 [2024-10-01 22:40:35.826887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.624 qpair failed and we were unable to recover it. 00:41:40.624 [2024-10-01 22:40:35.827219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.624 [2024-10-01 22:40:35.827229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.624 qpair failed and we were unable to recover it. 
00:41:40.624 [2024-10-01 22:40:35.827402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.624 [2024-10-01 22:40:35.827413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.624 qpair failed and we were unable to recover it. 00:41:40.624 [2024-10-01 22:40:35.827580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.624 [2024-10-01 22:40:35.827591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.624 qpair failed and we were unable to recover it. 00:41:40.624 [2024-10-01 22:40:35.827883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.624 [2024-10-01 22:40:35.827893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.624 qpair failed and we were unable to recover it. 00:41:40.624 [2024-10-01 22:40:35.828111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.624 [2024-10-01 22:40:35.828121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.624 qpair failed and we were unable to recover it. 00:41:40.913 [2024-10-01 22:40:35.828314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.913 [2024-10-01 22:40:35.828327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.913 qpair failed and we were unable to recover it. 00:41:40.913 [2024-10-01 22:40:35.828632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.913 [2024-10-01 22:40:35.828643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.913 qpair failed and we were unable to recover it. 00:41:40.913 [2024-10-01 22:40:35.828996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.913 [2024-10-01 22:40:35.829006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.913 qpair failed and we were unable to recover it. 00:41:40.913 [2024-10-01 22:40:35.829397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.913 [2024-10-01 22:40:35.829407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.913 qpair failed and we were unable to recover it. 00:41:40.913 [2024-10-01 22:40:35.829689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.913 [2024-10-01 22:40:35.829699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.913 qpair failed and we were unable to recover it. 00:41:40.913 [2024-10-01 22:40:35.829995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.913 [2024-10-01 22:40:35.830012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.913 qpair failed and we were unable to recover it. 
00:41:40.913 [2024-10-01 22:40:35.830199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.913 [2024-10-01 22:40:35.830211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.913 qpair failed and we were unable to recover it. 00:41:40.913 [2024-10-01 22:40:35.830514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.913 [2024-10-01 22:40:35.830525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.913 qpair failed and we were unable to recover it. 00:41:40.913 [2024-10-01 22:40:35.830834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.913 [2024-10-01 22:40:35.830844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.913 qpair failed and we were unable to recover it. 00:41:40.913 [2024-10-01 22:40:35.831044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.913 [2024-10-01 22:40:35.831054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.913 qpair failed and we were unable to recover it. 00:41:40.913 [2024-10-01 22:40:35.831393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.913 [2024-10-01 22:40:35.831404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.913 qpair failed and we were unable to recover it. 00:41:40.913 [2024-10-01 22:40:35.831685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.913 [2024-10-01 22:40:35.831695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.913 qpair failed and we were unable to recover it. 00:41:40.913 [2024-10-01 22:40:35.831979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.913 [2024-10-01 22:40:35.831988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.913 qpair failed and we were unable to recover it. 00:41:40.913 [2024-10-01 22:40:35.832299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.913 [2024-10-01 22:40:35.832309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.913 qpair failed and we were unable to recover it. 00:41:40.913 [2024-10-01 22:40:35.832620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.913 [2024-10-01 22:40:35.832635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.913 qpair failed and we were unable to recover it. 00:41:40.913 [2024-10-01 22:40:35.832980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.913 [2024-10-01 22:40:35.832990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.913 qpair failed and we were unable to recover it. 
00:41:40.913 [2024-10-01 22:40:35.833187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.913 [2024-10-01 22:40:35.833197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.913 qpair failed and we were unable to recover it. 00:41:40.913 [2024-10-01 22:40:35.833521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.913 [2024-10-01 22:40:35.833531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.913 qpair failed and we were unable to recover it. 00:41:40.913 [2024-10-01 22:40:35.833840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.913 [2024-10-01 22:40:35.833851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.913 qpair failed and we were unable to recover it. 00:41:40.913 [2024-10-01 22:40:35.834136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.913 [2024-10-01 22:40:35.834146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.913 qpair failed and we were unable to recover it. 00:41:40.913 [2024-10-01 22:40:35.834448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.913 [2024-10-01 22:40:35.834458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.913 qpair failed and we were unable to recover it. 00:41:40.913 [2024-10-01 22:40:35.834651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.913 [2024-10-01 22:40:35.834662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.913 qpair failed and we were unable to recover it. 00:41:40.913 [2024-10-01 22:40:35.834989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.913 [2024-10-01 22:40:35.834999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.913 qpair failed and we were unable to recover it. 00:41:40.913 [2024-10-01 22:40:35.835312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.913 [2024-10-01 22:40:35.835322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.913 qpair failed and we were unable to recover it. 00:41:40.913 [2024-10-01 22:40:35.835513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.913 [2024-10-01 22:40:35.835523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.913 qpair failed and we were unable to recover it. 00:41:40.913 [2024-10-01 22:40:35.835808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.913 [2024-10-01 22:40:35.835819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.913 qpair failed and we were unable to recover it. 
00:41:40.913 [2024-10-01 22:40:35.836144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.913 [2024-10-01 22:40:35.836154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.913 qpair failed and we were unable to recover it. 00:41:40.913 [2024-10-01 22:40:35.836480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.913 [2024-10-01 22:40:35.836490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.913 qpair failed and we were unable to recover it. 00:41:40.913 [2024-10-01 22:40:35.836768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.913 [2024-10-01 22:40:35.836781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.913 qpair failed and we were unable to recover it. 00:41:40.913 [2024-10-01 22:40:35.837012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.913 [2024-10-01 22:40:35.837022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.913 qpair failed and we were unable to recover it. 00:41:40.913 [2024-10-01 22:40:35.837371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.913 [2024-10-01 22:40:35.837381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.913 qpair failed and we were unable to recover it. 00:41:40.913 [2024-10-01 22:40:35.837664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.913 [2024-10-01 22:40:35.837674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.913 qpair failed and we were unable to recover it. 00:41:40.913 [2024-10-01 22:40:35.837971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.913 [2024-10-01 22:40:35.837982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.913 qpair failed and we were unable to recover it. 00:41:40.913 [2024-10-01 22:40:35.838289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.913 [2024-10-01 22:40:35.838298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.913 qpair failed and we were unable to recover it. 00:41:40.913 [2024-10-01 22:40:35.838524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.913 [2024-10-01 22:40:35.838534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.913 qpair failed and we were unable to recover it. 00:41:40.913 [2024-10-01 22:40:35.838856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.913 [2024-10-01 22:40:35.838867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.913 qpair failed and we were unable to recover it. 
00:41:40.913 [2024-10-01 22:40:35.839172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.913 [2024-10-01 22:40:35.839182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.913 qpair failed and we were unable to recover it. 00:41:40.913 [2024-10-01 22:40:35.839503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.913 [2024-10-01 22:40:35.839518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.913 qpair failed and we were unable to recover it. 00:41:40.914 [2024-10-01 22:40:35.839835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.914 [2024-10-01 22:40:35.839846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.914 qpair failed and we were unable to recover it. 00:41:40.914 [2024-10-01 22:40:35.840130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.914 [2024-10-01 22:40:35.840140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.914 qpair failed and we were unable to recover it. 00:41:40.914 [2024-10-01 22:40:35.840447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.914 [2024-10-01 22:40:35.840456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.914 qpair failed and we were unable to recover it. 00:41:40.914 [2024-10-01 22:40:35.840763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.914 [2024-10-01 22:40:35.840773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.914 qpair failed and we were unable to recover it. 00:41:40.914 [2024-10-01 22:40:35.841073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.914 [2024-10-01 22:40:35.841083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.914 qpair failed and we were unable to recover it. 00:41:40.914 [2024-10-01 22:40:35.841265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.914 [2024-10-01 22:40:35.841275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.914 qpair failed and we were unable to recover it. 00:41:40.914 [2024-10-01 22:40:35.841631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.914 [2024-10-01 22:40:35.841642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.914 qpair failed and we were unable to recover it. 00:41:40.914 [2024-10-01 22:40:35.841923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.914 [2024-10-01 22:40:35.841933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.914 qpair failed and we were unable to recover it. 
00:41:40.914 [2024-10-01 22:40:35.842236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.914 [2024-10-01 22:40:35.842245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.914 qpair failed and we were unable to recover it. 
[the same three-line error repeats roughly 210 times between 22:40:35.842 and 22:40:35.906: every reconnect attempt to 10.0.0.2 port 4420 fails with errno 111 (ECONNREFUSED) and tqpair=0x130a180 is never recovered; only the final occurrence is shown below] 
00:41:40.919 [2024-10-01 22:40:35.905698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.919 [2024-10-01 22:40:35.905709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.919 qpair failed and we were unable to recover it. 
00:41:40.919 [2024-10-01 22:40:35.906010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.919 [2024-10-01 22:40:35.906021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.919 qpair failed and we were unable to recover it. 00:41:40.919 [2024-10-01 22:40:35.906303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.919 [2024-10-01 22:40:35.906313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.919 qpair failed and we were unable to recover it. 00:41:40.919 [2024-10-01 22:40:35.906630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.919 [2024-10-01 22:40:35.906641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.919 qpair failed and we were unable to recover it. 00:41:40.919 [2024-10-01 22:40:35.906841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.919 [2024-10-01 22:40:35.906851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.919 qpair failed and we were unable to recover it. 00:41:40.919 [2024-10-01 22:40:35.907084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.919 [2024-10-01 22:40:35.907094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.919 qpair failed and we were unable to recover it. 00:41:40.919 [2024-10-01 22:40:35.907373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.919 [2024-10-01 22:40:35.907383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.919 qpair failed and we were unable to recover it. 00:41:40.919 [2024-10-01 22:40:35.907688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.919 [2024-10-01 22:40:35.907698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.919 qpair failed and we were unable to recover it. 00:41:40.919 [2024-10-01 22:40:35.907996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.919 [2024-10-01 22:40:35.908006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.919 qpair failed and we were unable to recover it. 00:41:40.919 [2024-10-01 22:40:35.908199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.919 [2024-10-01 22:40:35.908209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.919 qpair failed and we were unable to recover it. 00:41:40.919 [2024-10-01 22:40:35.908561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.919 [2024-10-01 22:40:35.908570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.919 qpair failed and we were unable to recover it. 
00:41:40.920 [2024-10-01 22:40:35.908917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.920 [2024-10-01 22:40:35.908927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.920 qpair failed and we were unable to recover it. 00:41:40.920 [2024-10-01 22:40:35.909236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.920 [2024-10-01 22:40:35.909246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.920 qpair failed and we were unable to recover it. 00:41:40.920 [2024-10-01 22:40:35.909554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.920 [2024-10-01 22:40:35.909563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.920 qpair failed and we were unable to recover it. 00:41:40.920 [2024-10-01 22:40:35.909753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.920 [2024-10-01 22:40:35.909764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.920 qpair failed and we were unable to recover it. 00:41:40.920 [2024-10-01 22:40:35.910096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.920 [2024-10-01 22:40:35.910106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.920 qpair failed and we were unable to recover it. 00:41:40.920 [2024-10-01 22:40:35.910438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.920 [2024-10-01 22:40:35.910448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.920 qpair failed and we were unable to recover it. 00:41:40.920 [2024-10-01 22:40:35.910758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.920 [2024-10-01 22:40:35.910771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.920 qpair failed and we were unable to recover it. 00:41:40.920 [2024-10-01 22:40:35.911066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.920 [2024-10-01 22:40:35.911076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.920 qpair failed and we were unable to recover it. 00:41:40.920 [2024-10-01 22:40:35.911388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.920 [2024-10-01 22:40:35.911399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.920 qpair failed and we were unable to recover it. 00:41:40.920 [2024-10-01 22:40:35.911693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.920 [2024-10-01 22:40:35.911703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.920 qpair failed and we were unable to recover it. 
00:41:40.920 [2024-10-01 22:40:35.911888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.920 [2024-10-01 22:40:35.911898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.920 qpair failed and we were unable to recover it. 00:41:40.920 [2024-10-01 22:40:35.912207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.920 [2024-10-01 22:40:35.912216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.920 qpair failed and we were unable to recover it. 00:41:40.920 [2024-10-01 22:40:35.912579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.920 [2024-10-01 22:40:35.912588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.920 qpair failed and we were unable to recover it. 00:41:40.920 [2024-10-01 22:40:35.912796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.920 [2024-10-01 22:40:35.912807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.920 qpair failed and we were unable to recover it. 00:41:40.920 [2024-10-01 22:40:35.912973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.920 [2024-10-01 22:40:35.912983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.920 qpair failed and we were unable to recover it. 00:41:40.920 [2024-10-01 22:40:35.913347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.920 [2024-10-01 22:40:35.913356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.920 qpair failed and we were unable to recover it. 00:41:40.920 [2024-10-01 22:40:35.913594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.920 [2024-10-01 22:40:35.913605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.920 qpair failed and we were unable to recover it. 00:41:40.920 [2024-10-01 22:40:35.913935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.920 [2024-10-01 22:40:35.913945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.920 qpair failed and we were unable to recover it. 00:41:40.920 [2024-10-01 22:40:35.914122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.920 [2024-10-01 22:40:35.914131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.920 qpair failed and we were unable to recover it. 00:41:40.920 [2024-10-01 22:40:35.914474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.920 [2024-10-01 22:40:35.914484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.920 qpair failed and we were unable to recover it. 
00:41:40.920 [2024-10-01 22:40:35.914779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.920 [2024-10-01 22:40:35.914790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.920 qpair failed and we were unable to recover it. 00:41:40.920 [2024-10-01 22:40:35.914986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.920 [2024-10-01 22:40:35.914996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.920 qpair failed and we were unable to recover it. 00:41:40.920 [2024-10-01 22:40:35.915324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.920 [2024-10-01 22:40:35.915333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.920 qpair failed and we were unable to recover it. 00:41:40.920 [2024-10-01 22:40:35.915650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.920 [2024-10-01 22:40:35.915660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.920 qpair failed and we were unable to recover it. 00:41:40.920 [2024-10-01 22:40:35.915971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.920 [2024-10-01 22:40:35.915980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.920 qpair failed and we were unable to recover it. 00:41:40.920 [2024-10-01 22:40:35.916385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.920 [2024-10-01 22:40:35.916395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.920 qpair failed and we were unable to recover it. 00:41:40.920 [2024-10-01 22:40:35.916691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.920 [2024-10-01 22:40:35.916701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.920 qpair failed and we were unable to recover it. 00:41:40.920 [2024-10-01 22:40:35.917028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.920 [2024-10-01 22:40:35.917038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.920 qpair failed and we were unable to recover it. 00:41:40.920 [2024-10-01 22:40:35.917346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.920 [2024-10-01 22:40:35.917356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.920 qpair failed and we were unable to recover it. 00:41:40.920 [2024-10-01 22:40:35.917641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.920 [2024-10-01 22:40:35.917651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.920 qpair failed and we were unable to recover it. 
00:41:40.920 [2024-10-01 22:40:35.917957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.920 [2024-10-01 22:40:35.917971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.920 qpair failed and we were unable to recover it. 00:41:40.920 [2024-10-01 22:40:35.918255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.920 [2024-10-01 22:40:35.918266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.920 qpair failed and we were unable to recover it. 00:41:40.920 [2024-10-01 22:40:35.918572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.920 [2024-10-01 22:40:35.918582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.920 qpair failed and we were unable to recover it. 00:41:40.920 [2024-10-01 22:40:35.918885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.920 [2024-10-01 22:40:35.918898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.920 qpair failed and we were unable to recover it. 00:41:40.920 [2024-10-01 22:40:35.919060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.920 [2024-10-01 22:40:35.919071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.920 qpair failed and we were unable to recover it. 00:41:40.920 [2024-10-01 22:40:35.919262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.920 [2024-10-01 22:40:35.919272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.920 qpair failed and we were unable to recover it. 00:41:40.920 [2024-10-01 22:40:35.919496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.920 [2024-10-01 22:40:35.919506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.920 qpair failed and we were unable to recover it. 00:41:40.920 [2024-10-01 22:40:35.919708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.920 [2024-10-01 22:40:35.919719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.920 qpair failed and we were unable to recover it. 00:41:40.920 [2024-10-01 22:40:35.919938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.920 [2024-10-01 22:40:35.919948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.920 qpair failed and we were unable to recover it. 00:41:40.921 [2024-10-01 22:40:35.920251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.921 [2024-10-01 22:40:35.920261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.921 qpair failed and we were unable to recover it. 
00:41:40.921 [2024-10-01 22:40:35.920567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.921 [2024-10-01 22:40:35.920577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.921 qpair failed and we were unable to recover it. 00:41:40.921 [2024-10-01 22:40:35.920894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.921 [2024-10-01 22:40:35.920905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.921 qpair failed and we were unable to recover it. 00:41:40.921 [2024-10-01 22:40:35.921188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.921 [2024-10-01 22:40:35.921207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.921 qpair failed and we were unable to recover it. 00:41:40.921 [2024-10-01 22:40:35.921401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.921 [2024-10-01 22:40:35.921411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.921 qpair failed and we were unable to recover it. 00:41:40.921 [2024-10-01 22:40:35.921656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.921 [2024-10-01 22:40:35.921667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.921 qpair failed and we were unable to recover it. 00:41:40.921 [2024-10-01 22:40:35.921866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.921 [2024-10-01 22:40:35.921877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.921 qpair failed and we were unable to recover it. 00:41:40.921 [2024-10-01 22:40:35.922235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.921 [2024-10-01 22:40:35.922245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.921 qpair failed and we were unable to recover it. 00:41:40.921 [2024-10-01 22:40:35.922552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.921 [2024-10-01 22:40:35.922567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.921 qpair failed and we were unable to recover it. 00:41:40.921 [2024-10-01 22:40:35.922847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.921 [2024-10-01 22:40:35.922858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.921 qpair failed and we were unable to recover it. 00:41:40.921 [2024-10-01 22:40:35.923056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.921 [2024-10-01 22:40:35.923066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.921 qpair failed and we were unable to recover it. 
00:41:40.921 [2024-10-01 22:40:35.923288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.921 [2024-10-01 22:40:35.923298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.921 qpair failed and we were unable to recover it. 00:41:40.921 [2024-10-01 22:40:35.923620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.921 [2024-10-01 22:40:35.923633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.921 qpair failed and we were unable to recover it. 00:41:40.921 [2024-10-01 22:40:35.923850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.921 [2024-10-01 22:40:35.923860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.921 qpair failed and we were unable to recover it. 00:41:40.921 [2024-10-01 22:40:35.924214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.921 [2024-10-01 22:40:35.924224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.921 qpair failed and we were unable to recover it. 00:41:40.921 [2024-10-01 22:40:35.924424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.921 [2024-10-01 22:40:35.924434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.921 qpair failed and we were unable to recover it. 00:41:40.921 [2024-10-01 22:40:35.924717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.921 [2024-10-01 22:40:35.924727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.921 qpair failed and we were unable to recover it. 00:41:40.921 [2024-10-01 22:40:35.925038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.921 [2024-10-01 22:40:35.925048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.921 qpair failed and we were unable to recover it. 00:41:40.921 [2024-10-01 22:40:35.925349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.921 [2024-10-01 22:40:35.925359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.921 qpair failed and we were unable to recover it. 00:41:40.921 [2024-10-01 22:40:35.925640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.921 [2024-10-01 22:40:35.925650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.921 qpair failed and we were unable to recover it. 00:41:40.921 [2024-10-01 22:40:35.926060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.921 [2024-10-01 22:40:35.926069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.921 qpair failed and we were unable to recover it. 
00:41:40.921 [2024-10-01 22:40:35.926228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.921 [2024-10-01 22:40:35.926238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.921 qpair failed and we were unable to recover it. 00:41:40.921 [2024-10-01 22:40:35.926540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.921 [2024-10-01 22:40:35.926555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.921 qpair failed and we were unable to recover it. 00:41:40.921 [2024-10-01 22:40:35.926861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.921 [2024-10-01 22:40:35.926871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.921 qpair failed and we were unable to recover it. 00:41:40.921 [2024-10-01 22:40:35.927032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.921 [2024-10-01 22:40:35.927043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.921 qpair failed and we were unable to recover it. 00:41:40.921 [2024-10-01 22:40:35.927219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.921 [2024-10-01 22:40:35.927230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.921 qpair failed and we were unable to recover it. 00:41:40.921 [2024-10-01 22:40:35.927555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.921 [2024-10-01 22:40:35.927566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.921 qpair failed and we were unable to recover it. 00:41:40.921 [2024-10-01 22:40:35.927882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.921 [2024-10-01 22:40:35.927893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.921 qpair failed and we were unable to recover it. 00:41:40.921 [2024-10-01 22:40:35.928225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.921 [2024-10-01 22:40:35.928236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.921 qpair failed and we were unable to recover it. 00:41:40.921 [2024-10-01 22:40:35.928538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.921 [2024-10-01 22:40:35.928548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.921 qpair failed and we were unable to recover it. 00:41:40.921 [2024-10-01 22:40:35.928858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.921 [2024-10-01 22:40:35.928870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.921 qpair failed and we were unable to recover it. 
00:41:40.921 [2024-10-01 22:40:35.929223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.921 [2024-10-01 22:40:35.929233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.921 qpair failed and we were unable to recover it. 00:41:40.921 [2024-10-01 22:40:35.929463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.921 [2024-10-01 22:40:35.929473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.921 qpair failed and we were unable to recover it. 00:41:40.921 [2024-10-01 22:40:35.929651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.921 [2024-10-01 22:40:35.929665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.921 qpair failed and we were unable to recover it. 00:41:40.921 [2024-10-01 22:40:35.929865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.921 [2024-10-01 22:40:35.929875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.921 qpair failed and we were unable to recover it. 00:41:40.921 [2024-10-01 22:40:35.930094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.921 [2024-10-01 22:40:35.930107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.921 qpair failed and we were unable to recover it. 00:41:40.921 [2024-10-01 22:40:35.930374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.921 [2024-10-01 22:40:35.930386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.921 qpair failed and we were unable to recover it. 00:41:40.921 [2024-10-01 22:40:35.930704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.922 [2024-10-01 22:40:35.930715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.922 qpair failed and we were unable to recover it. 00:41:40.922 [2024-10-01 22:40:35.931083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.922 [2024-10-01 22:40:35.931095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.922 qpair failed and we were unable to recover it. 00:41:40.922 [2024-10-01 22:40:35.931410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.922 [2024-10-01 22:40:35.931420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.922 qpair failed and we were unable to recover it. 00:41:40.922 [2024-10-01 22:40:35.931702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.922 [2024-10-01 22:40:35.931712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.922 qpair failed and we were unable to recover it. 
00:41:40.922 [2024-10-01 22:40:35.932002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.922 [2024-10-01 22:40:35.932012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.922 qpair failed and we were unable to recover it. 00:41:40.922 [2024-10-01 22:40:35.932308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.922 [2024-10-01 22:40:35.932326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.922 qpair failed and we were unable to recover it. 00:41:40.922 [2024-10-01 22:40:35.932643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.922 [2024-10-01 22:40:35.932654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.922 qpair failed and we were unable to recover it. 00:41:40.922 [2024-10-01 22:40:35.933005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.922 [2024-10-01 22:40:35.933015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.922 qpair failed and we were unable to recover it. 00:41:40.922 [2024-10-01 22:40:35.933241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.922 [2024-10-01 22:40:35.933250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.922 qpair failed and we were unable to recover it. 00:41:40.922 [2024-10-01 22:40:35.933475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.922 [2024-10-01 22:40:35.933485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.922 qpair failed and we were unable to recover it. 00:41:40.922 [2024-10-01 22:40:35.933697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.922 [2024-10-01 22:40:35.933708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.922 qpair failed and we were unable to recover it. 00:41:40.922 [2024-10-01 22:40:35.933918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.922 [2024-10-01 22:40:35.933928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.922 qpair failed and we were unable to recover it. 00:41:40.922 [2024-10-01 22:40:35.934256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.922 [2024-10-01 22:40:35.934266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.922 qpair failed and we were unable to recover it. 00:41:40.922 [2024-10-01 22:40:35.934603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.922 [2024-10-01 22:40:35.934612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.922 qpair failed and we were unable to recover it. 
00:41:40.922 [2024-10-01 22:40:35.934888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.922 [2024-10-01 22:40:35.934899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.922 qpair failed and we were unable to recover it. 00:41:40.922 [2024-10-01 22:40:35.935064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.922 [2024-10-01 22:40:35.935074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.922 qpair failed and we were unable to recover it. 00:41:40.922 [2024-10-01 22:40:35.935382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.922 [2024-10-01 22:40:35.935392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.922 qpair failed and we were unable to recover it. 00:41:40.922 [2024-10-01 22:40:35.935678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.922 [2024-10-01 22:40:35.935688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.922 qpair failed and we were unable to recover it. 00:41:40.922 [2024-10-01 22:40:35.936028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.922 [2024-10-01 22:40:35.936037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.922 qpair failed and we were unable to recover it. 00:41:40.922 [2024-10-01 22:40:35.936302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.922 [2024-10-01 22:40:35.936312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.922 qpair failed and we were unable to recover it. 00:41:40.922 [2024-10-01 22:40:35.936523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.922 [2024-10-01 22:40:35.936533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.922 qpair failed and we were unable to recover it. 00:41:40.922 [2024-10-01 22:40:35.936795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.922 [2024-10-01 22:40:35.936805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.922 qpair failed and we were unable to recover it. 00:41:40.922 [2024-10-01 22:40:35.937134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.922 [2024-10-01 22:40:35.937144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.922 qpair failed and we were unable to recover it. 00:41:40.922 [2024-10-01 22:40:35.937456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.922 [2024-10-01 22:40:35.937466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.922 qpair failed and we were unable to recover it. 
00:41:40.922 [2024-10-01 22:40:35.937858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.922 [2024-10-01 22:40:35.937868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.922 qpair failed and we were unable to recover it. 00:41:40.922 [2024-10-01 22:40:35.938039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.922 [2024-10-01 22:40:35.938051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.922 qpair failed and we were unable to recover it. 00:41:40.922 [2024-10-01 22:40:35.938435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.922 [2024-10-01 22:40:35.938445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.922 qpair failed and we were unable to recover it. 00:41:40.922 [2024-10-01 22:40:35.938751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.922 [2024-10-01 22:40:35.938762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.922 qpair failed and we were unable to recover it. 00:41:40.922 [2024-10-01 22:40:35.938959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.922 [2024-10-01 22:40:35.938969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.922 qpair failed and we were unable to recover it. 00:41:40.922 [2024-10-01 22:40:35.939323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.922 [2024-10-01 22:40:35.939332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.922 qpair failed and we were unable to recover it. 00:41:40.922 [2024-10-01 22:40:35.939643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.922 [2024-10-01 22:40:35.939654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.922 qpair failed and we were unable to recover it. 00:41:40.922 [2024-10-01 22:40:35.939907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.922 [2024-10-01 22:40:35.939917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.922 qpair failed and we were unable to recover it. 00:41:40.922 [2024-10-01 22:40:35.940253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.922 [2024-10-01 22:40:35.940264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.922 qpair failed and we were unable to recover it. 00:41:40.922 [2024-10-01 22:40:35.940583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.922 [2024-10-01 22:40:35.940593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.922 qpair failed and we were unable to recover it. 
00:41:40.922 [2024-10-01 22:40:35.940906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.923 [2024-10-01 22:40:35.940916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.923 qpair failed and we were unable to recover it. 00:41:40.923 [2024-10-01 22:40:35.941222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.923 [2024-10-01 22:40:35.941232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.923 qpair failed and we were unable to recover it. 00:41:40.923 [2024-10-01 22:40:35.941523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.923 [2024-10-01 22:40:35.941534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.923 qpair failed and we were unable to recover it. 00:41:40.923 [2024-10-01 22:40:35.941710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.923 [2024-10-01 22:40:35.941721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.923 qpair failed and we were unable to recover it. 00:41:40.923 [2024-10-01 22:40:35.942033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.923 [2024-10-01 22:40:35.942042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.923 qpair failed and we were unable to recover it. 00:41:40.923 [2024-10-01 22:40:35.942255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.923 [2024-10-01 22:40:35.942265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.923 qpair failed and we were unable to recover it. 00:41:40.923 [2024-10-01 22:40:35.942586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.923 [2024-10-01 22:40:35.942596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.923 qpair failed and we were unable to recover it. 00:41:40.923 [2024-10-01 22:40:35.942781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.923 [2024-10-01 22:40:35.942791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.923 qpair failed and we were unable to recover it. 00:41:40.923 [2024-10-01 22:40:35.943108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.923 [2024-10-01 22:40:35.943118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.923 qpair failed and we were unable to recover it. 00:41:40.923 [2024-10-01 22:40:35.943399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.923 [2024-10-01 22:40:35.943417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.923 qpair failed and we were unable to recover it. 
00:41:40.923 [2024-10-01 22:40:35.943712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.923 [2024-10-01 22:40:35.943723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.923 qpair failed and we were unable to recover it.
00:41:40.923 [... the same three-part sequence repeats back-to-back with only the timestamps advancing, from 2024-10-01 22:40:35.943712 through 22:40:36.007563: posix_sock_create reports connect() failed with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x130a180 (addr=10.0.0.2, port=4420), and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:41:40.929 [2024-10-01 22:40:36.007851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.929 [2024-10-01 22:40:36.007861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.929 qpair failed and we were unable to recover it. 00:41:40.929 [2024-10-01 22:40:36.008167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.929 [2024-10-01 22:40:36.008177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.929 qpair failed and we were unable to recover it. 00:41:40.929 [2024-10-01 22:40:36.008499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.929 [2024-10-01 22:40:36.008510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.929 qpair failed and we were unable to recover it. 00:41:40.929 [2024-10-01 22:40:36.008813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.929 [2024-10-01 22:40:36.008824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.929 qpair failed and we were unable to recover it. 00:41:40.929 [2024-10-01 22:40:36.009145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.929 [2024-10-01 22:40:36.009156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.929 qpair failed and we were unable to recover it. 00:41:40.929 [2024-10-01 22:40:36.009481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.929 [2024-10-01 22:40:36.009491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.929 qpair failed and we were unable to recover it. 00:41:40.929 [2024-10-01 22:40:36.009792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.929 [2024-10-01 22:40:36.009802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.929 qpair failed and we were unable to recover it. 00:41:40.929 [2024-10-01 22:40:36.010107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.929 [2024-10-01 22:40:36.010117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.929 qpair failed and we were unable to recover it. 00:41:40.929 [2024-10-01 22:40:36.010425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.929 [2024-10-01 22:40:36.010435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.929 qpair failed and we were unable to recover it. 00:41:40.929 [2024-10-01 22:40:36.010608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.929 [2024-10-01 22:40:36.010618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.929 qpair failed and we were unable to recover it. 
00:41:40.929 [2024-10-01 22:40:36.010909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.929 [2024-10-01 22:40:36.010919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.929 qpair failed and we were unable to recover it. 00:41:40.929 [2024-10-01 22:40:36.011229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.929 [2024-10-01 22:40:36.011240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.929 qpair failed and we were unable to recover it. 00:41:40.929 [2024-10-01 22:40:36.011554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.929 [2024-10-01 22:40:36.011563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.929 qpair failed and we were unable to recover it. 00:41:40.929 [2024-10-01 22:40:36.011738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.929 [2024-10-01 22:40:36.011751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.929 qpair failed and we were unable to recover it. 00:41:40.929 [2024-10-01 22:40:36.012044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.929 [2024-10-01 22:40:36.012056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.929 qpair failed and we were unable to recover it. 00:41:40.929 [2024-10-01 22:40:36.012359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.929 [2024-10-01 22:40:36.012373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.929 qpair failed and we were unable to recover it. 00:41:40.929 [2024-10-01 22:40:36.012676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.930 [2024-10-01 22:40:36.012689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.930 qpair failed and we were unable to recover it. 00:41:40.930 [2024-10-01 22:40:36.012997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.930 [2024-10-01 22:40:36.013007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.930 qpair failed and we were unable to recover it. 00:41:40.930 [2024-10-01 22:40:36.013290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.930 [2024-10-01 22:40:36.013301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.930 qpair failed and we were unable to recover it. 00:41:40.930 [2024-10-01 22:40:36.013684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.930 [2024-10-01 22:40:36.013695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.930 qpair failed and we were unable to recover it. 
00:41:40.930 [2024-10-01 22:40:36.014043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.930 [2024-10-01 22:40:36.014053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.930 qpair failed and we were unable to recover it. 00:41:40.930 [2024-10-01 22:40:36.014359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.930 [2024-10-01 22:40:36.014369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.930 qpair failed and we were unable to recover it. 00:41:40.930 [2024-10-01 22:40:36.014675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.930 [2024-10-01 22:40:36.014686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.930 qpair failed and we were unable to recover it. 00:41:40.930 [2024-10-01 22:40:36.015067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.930 [2024-10-01 22:40:36.015077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.930 qpair failed and we were unable to recover it. 00:41:40.930 [2024-10-01 22:40:36.015277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.930 [2024-10-01 22:40:36.015287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.930 qpair failed and we were unable to recover it. 00:41:40.930 [2024-10-01 22:40:36.015617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.930 [2024-10-01 22:40:36.015633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.930 qpair failed and we were unable to recover it. 00:41:40.930 [2024-10-01 22:40:36.015969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.930 [2024-10-01 22:40:36.015979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.930 qpair failed and we were unable to recover it. 00:41:40.930 [2024-10-01 22:40:36.016292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.930 [2024-10-01 22:40:36.016302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.930 qpair failed and we were unable to recover it. 00:41:40.930 [2024-10-01 22:40:36.016621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.930 [2024-10-01 22:40:36.016636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.930 qpair failed and we were unable to recover it. 00:41:40.930 [2024-10-01 22:40:36.016908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.930 [2024-10-01 22:40:36.016918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.930 qpair failed and we were unable to recover it. 
00:41:40.930 [2024-10-01 22:40:36.017212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.930 [2024-10-01 22:40:36.017229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.930 qpair failed and we were unable to recover it. 00:41:40.930 [2024-10-01 22:40:36.017557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.930 [2024-10-01 22:40:36.017568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.930 qpair failed and we were unable to recover it. 00:41:40.930 [2024-10-01 22:40:36.017871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.930 [2024-10-01 22:40:36.017882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.930 qpair failed and we were unable to recover it. 00:41:40.930 [2024-10-01 22:40:36.018182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.930 [2024-10-01 22:40:36.018192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.930 qpair failed and we were unable to recover it. 00:41:40.930 [2024-10-01 22:40:36.018497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.930 [2024-10-01 22:40:36.018507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.930 qpair failed and we were unable to recover it. 00:41:40.930 [2024-10-01 22:40:36.018788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.930 [2024-10-01 22:40:36.018798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.930 qpair failed and we were unable to recover it. 00:41:40.930 [2024-10-01 22:40:36.019105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.930 [2024-10-01 22:40:36.019115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.930 qpair failed and we were unable to recover it. 00:41:40.930 [2024-10-01 22:40:36.019424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.930 [2024-10-01 22:40:36.019434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.930 qpair failed and we were unable to recover it. 00:41:40.930 [2024-10-01 22:40:36.019743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.930 [2024-10-01 22:40:36.019753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.930 qpair failed and we were unable to recover it. 00:41:40.930 [2024-10-01 22:40:36.020050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.930 [2024-10-01 22:40:36.020060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.930 qpair failed and we were unable to recover it. 
00:41:40.930 [2024-10-01 22:40:36.020369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.930 [2024-10-01 22:40:36.020378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.930 qpair failed and we were unable to recover it. 00:41:40.930 [2024-10-01 22:40:36.020685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.930 [2024-10-01 22:40:36.020695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.930 qpair failed and we were unable to recover it. 00:41:40.930 [2024-10-01 22:40:36.021004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.930 [2024-10-01 22:40:36.021015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.930 qpair failed and we were unable to recover it. 00:41:40.930 [2024-10-01 22:40:36.021343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.930 [2024-10-01 22:40:36.021353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.930 qpair failed and we were unable to recover it. 00:41:40.930 [2024-10-01 22:40:36.021537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.930 [2024-10-01 22:40:36.021548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.930 qpair failed and we were unable to recover it. 00:41:40.930 [2024-10-01 22:40:36.021899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.930 [2024-10-01 22:40:36.021911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.930 qpair failed and we were unable to recover it. 00:41:40.930 [2024-10-01 22:40:36.022222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.930 [2024-10-01 22:40:36.022232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.930 qpair failed and we were unable to recover it. 00:41:40.930 [2024-10-01 22:40:36.022406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.930 [2024-10-01 22:40:36.022417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.930 qpair failed and we were unable to recover it. 00:41:40.930 [2024-10-01 22:40:36.022568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.930 [2024-10-01 22:40:36.022581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.930 qpair failed and we were unable to recover it. 00:41:40.930 [2024-10-01 22:40:36.022748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.930 [2024-10-01 22:40:36.022759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.930 qpair failed and we were unable to recover it. 
00:41:40.930 [2024-10-01 22:40:36.023064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.930 [2024-10-01 22:40:36.023075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.930 qpair failed and we were unable to recover it. 00:41:40.930 [2024-10-01 22:40:36.023348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.930 [2024-10-01 22:40:36.023358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.930 qpair failed and we were unable to recover it. 00:41:40.930 [2024-10-01 22:40:36.023670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.931 [2024-10-01 22:40:36.023681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.931 qpair failed and we were unable to recover it. 00:41:40.931 [2024-10-01 22:40:36.023990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.931 [2024-10-01 22:40:36.024001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.931 qpair failed and we were unable to recover it. 00:41:40.931 [2024-10-01 22:40:36.024318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.931 [2024-10-01 22:40:36.024328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.931 qpair failed and we were unable to recover it. 00:41:40.931 [2024-10-01 22:40:36.024606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.931 [2024-10-01 22:40:36.024616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.931 qpair failed and we were unable to recover it. 00:41:40.931 [2024-10-01 22:40:36.024955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.931 [2024-10-01 22:40:36.024965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.931 qpair failed and we were unable to recover it. 00:41:40.931 [2024-10-01 22:40:36.025154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.931 [2024-10-01 22:40:36.025163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.931 qpair failed and we were unable to recover it. 00:41:40.931 [2024-10-01 22:40:36.025483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.931 [2024-10-01 22:40:36.025492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.931 qpair failed and we were unable to recover it. 00:41:40.931 [2024-10-01 22:40:36.025801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.931 [2024-10-01 22:40:36.025811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.931 qpair failed and we were unable to recover it. 
00:41:40.931 [2024-10-01 22:40:36.026041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.931 [2024-10-01 22:40:36.026051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.931 qpair failed and we were unable to recover it. 00:41:40.931 [2024-10-01 22:40:36.026366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.931 [2024-10-01 22:40:36.026376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.931 qpair failed and we were unable to recover it. 00:41:40.931 [2024-10-01 22:40:36.026741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.931 [2024-10-01 22:40:36.026752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.931 qpair failed and we were unable to recover it. 00:41:40.931 [2024-10-01 22:40:36.027063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.931 [2024-10-01 22:40:36.027073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.931 qpair failed and we were unable to recover it. 00:41:40.931 [2024-10-01 22:40:36.027357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.931 [2024-10-01 22:40:36.027368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.931 qpair failed and we were unable to recover it. 00:41:40.931 [2024-10-01 22:40:36.027668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.931 [2024-10-01 22:40:36.027679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.931 qpair failed and we were unable to recover it. 00:41:40.931 [2024-10-01 22:40:36.028007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.931 [2024-10-01 22:40:36.028017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.931 qpair failed and we were unable to recover it. 00:41:40.931 [2024-10-01 22:40:36.028323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.931 [2024-10-01 22:40:36.028333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.931 qpair failed and we were unable to recover it. 00:41:40.931 [2024-10-01 22:40:36.028637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.931 [2024-10-01 22:40:36.028647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.931 qpair failed and we were unable to recover it. 00:41:40.931 [2024-10-01 22:40:36.028962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.931 [2024-10-01 22:40:36.028971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.931 qpair failed and we were unable to recover it. 
00:41:40.931 [2024-10-01 22:40:36.029285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.931 [2024-10-01 22:40:36.029295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.931 qpair failed and we were unable to recover it. 00:41:40.931 [2024-10-01 22:40:36.029607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.931 [2024-10-01 22:40:36.029617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.931 qpair failed and we were unable to recover it. 00:41:40.931 [2024-10-01 22:40:36.029950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.931 [2024-10-01 22:40:36.029961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.931 qpair failed and we were unable to recover it. 00:41:40.931 [2024-10-01 22:40:36.030292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.931 [2024-10-01 22:40:36.030303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.931 qpair failed and we were unable to recover it. 00:41:40.931 [2024-10-01 22:40:36.030613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.931 [2024-10-01 22:40:36.030628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.931 qpair failed and we were unable to recover it. 00:41:40.931 [2024-10-01 22:40:36.030990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.931 [2024-10-01 22:40:36.031000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.931 qpair failed and we were unable to recover it. 00:41:40.931 [2024-10-01 22:40:36.031291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.931 [2024-10-01 22:40:36.031302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.931 qpair failed and we were unable to recover it. 00:41:40.931 [2024-10-01 22:40:36.031628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.931 [2024-10-01 22:40:36.031639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.931 qpair failed and we were unable to recover it. 00:41:40.931 [2024-10-01 22:40:36.031940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.931 [2024-10-01 22:40:36.031951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.931 qpair failed and we were unable to recover it. 00:41:40.931 [2024-10-01 22:40:36.032337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.931 [2024-10-01 22:40:36.032346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.931 qpair failed and we were unable to recover it. 
00:41:40.931 [2024-10-01 22:40:36.032690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.931 [2024-10-01 22:40:36.032701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.931 qpair failed and we were unable to recover it. 00:41:40.931 [2024-10-01 22:40:36.033006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.931 [2024-10-01 22:40:36.033016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.931 qpair failed and we were unable to recover it. 00:41:40.931 [2024-10-01 22:40:36.033372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.931 [2024-10-01 22:40:36.033382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.931 qpair failed and we were unable to recover it. 00:41:40.931 [2024-10-01 22:40:36.033690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.931 [2024-10-01 22:40:36.033703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.931 qpair failed and we were unable to recover it. 00:41:40.931 [2024-10-01 22:40:36.033894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.931 [2024-10-01 22:40:36.033904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.931 qpair failed and we were unable to recover it. 00:41:40.931 [2024-10-01 22:40:36.034114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.931 [2024-10-01 22:40:36.034124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.931 qpair failed and we were unable to recover it. 00:41:40.931 [2024-10-01 22:40:36.034374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.932 [2024-10-01 22:40:36.034384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.932 qpair failed and we were unable to recover it. 00:41:40.932 [2024-10-01 22:40:36.034736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.932 [2024-10-01 22:40:36.034747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.932 qpair failed and we were unable to recover it. 00:41:40.932 [2024-10-01 22:40:36.035031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.932 [2024-10-01 22:40:36.035041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.932 qpair failed and we were unable to recover it. 00:41:40.932 [2024-10-01 22:40:36.035321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.932 [2024-10-01 22:40:36.035331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.932 qpair failed and we were unable to recover it. 
00:41:40.932 [2024-10-01 22:40:36.035657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.932 [2024-10-01 22:40:36.035667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.932 qpair failed and we were unable to recover it. 00:41:40.932 [2024-10-01 22:40:36.035870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.932 [2024-10-01 22:40:36.035879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.932 qpair failed and we were unable to recover it. 00:41:40.932 [2024-10-01 22:40:36.036185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.932 [2024-10-01 22:40:36.036195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.932 qpair failed and we were unable to recover it. 00:41:40.932 [2024-10-01 22:40:36.036521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.932 [2024-10-01 22:40:36.036531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.932 qpair failed and we were unable to recover it. 00:41:40.932 [2024-10-01 22:40:36.036850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.932 [2024-10-01 22:40:36.036861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.932 qpair failed and we were unable to recover it. 00:41:40.932 [2024-10-01 22:40:36.037035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.932 [2024-10-01 22:40:36.037046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.932 qpair failed and we were unable to recover it. 00:41:40.932 [2024-10-01 22:40:36.037355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.932 [2024-10-01 22:40:36.037365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.932 qpair failed and we were unable to recover it. 00:41:40.932 [2024-10-01 22:40:36.037666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.932 [2024-10-01 22:40:36.037676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.932 qpair failed and we were unable to recover it. 00:41:40.932 [2024-10-01 22:40:36.038001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.932 [2024-10-01 22:40:36.038011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.932 qpair failed and we were unable to recover it. 00:41:40.932 [2024-10-01 22:40:36.038350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.932 [2024-10-01 22:40:36.038360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.932 qpair failed and we were unable to recover it. 
00:41:40.932 [2024-10-01 22:40:36.038676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.932 [2024-10-01 22:40:36.038690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.932 qpair failed and we were unable to recover it. 00:41:40.932 [2024-10-01 22:40:36.039017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.932 [2024-10-01 22:40:36.039027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.932 qpair failed and we were unable to recover it. 00:41:40.932 [2024-10-01 22:40:36.039329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.932 [2024-10-01 22:40:36.039339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.932 qpair failed and we were unable to recover it. 00:41:40.932 [2024-10-01 22:40:36.039641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.932 [2024-10-01 22:40:36.039657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.932 qpair failed and we were unable to recover it. 00:41:40.932 [2024-10-01 22:40:36.039866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.932 [2024-10-01 22:40:36.039876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.932 qpair failed and we were unable to recover it. 00:41:40.932 [2024-10-01 22:40:36.040178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.932 [2024-10-01 22:40:36.040188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.932 qpair failed and we were unable to recover it. 00:41:40.932 [2024-10-01 22:40:36.040359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.932 [2024-10-01 22:40:36.040371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.932 qpair failed and we were unable to recover it. 00:41:40.932 [2024-10-01 22:40:36.040679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.932 [2024-10-01 22:40:36.040690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.932 qpair failed and we were unable to recover it. 00:41:40.932 [2024-10-01 22:40:36.040897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.932 [2024-10-01 22:40:36.040907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.932 qpair failed and we were unable to recover it. 00:41:40.932 [2024-10-01 22:40:36.041182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.932 [2024-10-01 22:40:36.041192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.932 qpair failed and we were unable to recover it. 
00:41:40.932 [2024-10-01 22:40:36.041361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.932 [2024-10-01 22:40:36.041374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.932 qpair failed and we were unable to recover it. 00:41:40.932 [2024-10-01 22:40:36.041622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.932 [2024-10-01 22:40:36.041635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.932 qpair failed and we were unable to recover it. 00:41:40.932 [2024-10-01 22:40:36.041909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.932 [2024-10-01 22:40:36.041919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.932 qpair failed and we were unable to recover it. 00:41:40.932 [2024-10-01 22:40:36.042228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.932 [2024-10-01 22:40:36.042238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.932 qpair failed and we were unable to recover it. 00:41:40.932 [2024-10-01 22:40:36.042574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.932 [2024-10-01 22:40:36.042584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.932 qpair failed and we were unable to recover it. 00:41:40.932 [2024-10-01 22:40:36.042888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.932 [2024-10-01 22:40:36.042899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.932 qpair failed and we were unable to recover it. 00:41:40.932 [2024-10-01 22:40:36.043207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.932 [2024-10-01 22:40:36.043217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.932 qpair failed and we were unable to recover it. 00:41:40.933 [2024-10-01 22:40:36.043600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.933 [2024-10-01 22:40:36.043610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.933 qpair failed and we were unable to recover it. 00:41:40.933 [2024-10-01 22:40:36.043905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.933 [2024-10-01 22:40:36.043915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.933 qpair failed and we were unable to recover it. 00:41:40.933 [2024-10-01 22:40:36.044221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.933 [2024-10-01 22:40:36.044230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.933 qpair failed and we were unable to recover it. 
00:41:40.933 [2024-10-01 22:40:36.044554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.933 [2024-10-01 22:40:36.044564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.933 qpair failed and we were unable to recover it. 00:41:40.933 [2024-10-01 22:40:36.044864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.933 [2024-10-01 22:40:36.044875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.933 qpair failed and we were unable to recover it. 00:41:40.933 [2024-10-01 22:40:36.045184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.933 [2024-10-01 22:40:36.045194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.933 qpair failed and we were unable to recover it. 00:41:40.933 [2024-10-01 22:40:36.045504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.933 [2024-10-01 22:40:36.045515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.933 qpair failed and we were unable to recover it. 00:41:40.933 [2024-10-01 22:40:36.045800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.933 [2024-10-01 22:40:36.045811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.933 qpair failed and we were unable to recover it. 00:41:40.933 [2024-10-01 22:40:36.046131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.933 [2024-10-01 22:40:36.046141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.933 qpair failed and we were unable to recover it. 00:41:40.933 [2024-10-01 22:40:36.046442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.933 [2024-10-01 22:40:36.046453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.933 qpair failed and we were unable to recover it. 00:41:40.933 [2024-10-01 22:40:36.046777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.933 [2024-10-01 22:40:36.046787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.933 qpair failed and we were unable to recover it. 00:41:40.933 [2024-10-01 22:40:36.047078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.933 [2024-10-01 22:40:36.047087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.933 qpair failed and we were unable to recover it. 00:41:40.933 [2024-10-01 22:40:36.047408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.933 [2024-10-01 22:40:36.047418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.933 qpair failed and we were unable to recover it. 
00:41:40.933 [2024-10-01 22:40:36.047706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:40.933 [2024-10-01 22:40:36.047717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420
00:41:40.933 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet for tqpair=0x130a180 (addr=10.0.0.2, port=4420) repeats continuously from 2024-10-01 22:40:36.047 through 22:40:36.111 ...]
00:41:40.939 [2024-10-01 22:40:36.111041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:40.939 [2024-10-01 22:40:36.111051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420
00:41:40.939 qpair failed and we were unable to recover it.
00:41:40.939 [2024-10-01 22:40:36.111345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.939 [2024-10-01 22:40:36.111356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.939 qpair failed and we were unable to recover it. 00:41:40.939 [2024-10-01 22:40:36.111748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.939 [2024-10-01 22:40:36.111758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.939 qpair failed and we were unable to recover it. 00:41:40.939 [2024-10-01 22:40:36.112063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.939 [2024-10-01 22:40:36.112073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.939 qpair failed and we were unable to recover it. 00:41:40.939 [2024-10-01 22:40:36.112399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.939 [2024-10-01 22:40:36.112409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.939 qpair failed and we were unable to recover it. 00:41:40.939 [2024-10-01 22:40:36.112691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.939 [2024-10-01 22:40:36.112701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.939 qpair failed and we were unable to recover it. 00:41:40.939 [2024-10-01 22:40:36.113011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.939 [2024-10-01 22:40:36.113021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.939 qpair failed and we were unable to recover it. 00:41:40.939 [2024-10-01 22:40:36.113305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.939 [2024-10-01 22:40:36.113315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.939 qpair failed and we were unable to recover it. 00:41:40.939 [2024-10-01 22:40:36.113622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.939 [2024-10-01 22:40:36.113640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.939 qpair failed and we were unable to recover it. 00:41:40.939 [2024-10-01 22:40:36.114007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.939 [2024-10-01 22:40:36.114016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.939 qpair failed and we were unable to recover it. 00:41:40.939 [2024-10-01 22:40:36.114182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.939 [2024-10-01 22:40:36.114193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.939 qpair failed and we were unable to recover it. 
00:41:40.939 [2024-10-01 22:40:36.114486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.939 [2024-10-01 22:40:36.114496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.939 qpair failed and we were unable to recover it. 00:41:40.939 [2024-10-01 22:40:36.114774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.939 [2024-10-01 22:40:36.114784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.939 qpair failed and we were unable to recover it. 00:41:40.939 [2024-10-01 22:40:36.115008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.939 [2024-10-01 22:40:36.115018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.939 qpair failed and we were unable to recover it. 00:41:40.939 [2024-10-01 22:40:36.115345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.939 [2024-10-01 22:40:36.115355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.939 qpair failed and we were unable to recover it. 00:41:40.939 [2024-10-01 22:40:36.115663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.939 [2024-10-01 22:40:36.115673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.939 qpair failed and we were unable to recover it. 00:41:40.939 [2024-10-01 22:40:36.115970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.939 [2024-10-01 22:40:36.115981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.939 qpair failed and we were unable to recover it. 00:41:40.939 [2024-10-01 22:40:36.116259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.939 [2024-10-01 22:40:36.116269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.939 qpair failed and we were unable to recover it. 00:41:40.939 [2024-10-01 22:40:36.116575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.939 [2024-10-01 22:40:36.116586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.939 qpair failed and we were unable to recover it. 00:41:40.939 [2024-10-01 22:40:36.116825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.939 [2024-10-01 22:40:36.116835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.939 qpair failed and we were unable to recover it. 00:41:40.939 [2024-10-01 22:40:36.117217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.939 [2024-10-01 22:40:36.117227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.939 qpair failed and we were unable to recover it. 
00:41:40.939 [2024-10-01 22:40:36.117566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.939 [2024-10-01 22:40:36.117575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.939 qpair failed and we were unable to recover it. 00:41:40.939 [2024-10-01 22:40:36.117978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.939 [2024-10-01 22:40:36.117988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.939 qpair failed and we were unable to recover it. 00:41:40.939 [2024-10-01 22:40:36.118173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.939 [2024-10-01 22:40:36.118183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.939 qpair failed and we were unable to recover it. 00:41:40.939 [2024-10-01 22:40:36.118412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.939 [2024-10-01 22:40:36.118422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.939 qpair failed and we were unable to recover it. 00:41:40.939 [2024-10-01 22:40:36.118718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.939 [2024-10-01 22:40:36.118729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.939 qpair failed and we were unable to recover it. 00:41:40.939 [2024-10-01 22:40:36.119020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.939 [2024-10-01 22:40:36.119029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.939 qpair failed and we were unable to recover it. 00:41:40.939 [2024-10-01 22:40:36.119353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.939 [2024-10-01 22:40:36.119363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.939 qpair failed and we were unable to recover it. 00:41:40.939 [2024-10-01 22:40:36.119643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.939 [2024-10-01 22:40:36.119654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.939 qpair failed and we were unable to recover it. 00:41:40.939 [2024-10-01 22:40:36.119830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.939 [2024-10-01 22:40:36.119840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.939 qpair failed and we were unable to recover it. 00:41:40.939 [2024-10-01 22:40:36.120005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.939 [2024-10-01 22:40:36.120015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.939 qpair failed and we were unable to recover it. 
00:41:40.939 [2024-10-01 22:40:36.120284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.940 [2024-10-01 22:40:36.120294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.940 qpair failed and we were unable to recover it. 00:41:40.940 [2024-10-01 22:40:36.120503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.940 [2024-10-01 22:40:36.120513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.940 qpair failed and we were unable to recover it. 00:41:40.940 [2024-10-01 22:40:36.120856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.940 [2024-10-01 22:40:36.120866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.940 qpair failed and we were unable to recover it. 00:41:40.940 [2024-10-01 22:40:36.121250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.940 [2024-10-01 22:40:36.121260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.940 qpair failed and we were unable to recover it. 00:41:40.940 [2024-10-01 22:40:36.121570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.940 [2024-10-01 22:40:36.121580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.940 qpair failed and we were unable to recover it. 00:41:40.940 [2024-10-01 22:40:36.121886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.940 [2024-10-01 22:40:36.121896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.940 qpair failed and we were unable to recover it. 00:41:40.940 [2024-10-01 22:40:36.122283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.940 [2024-10-01 22:40:36.122293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.940 qpair failed and we were unable to recover it. 00:41:40.940 [2024-10-01 22:40:36.122640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.940 [2024-10-01 22:40:36.122651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.940 qpair failed and we were unable to recover it. 00:41:40.940 [2024-10-01 22:40:36.122991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.940 [2024-10-01 22:40:36.123001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.940 qpair failed and we were unable to recover it. 00:41:40.940 [2024-10-01 22:40:36.123306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.940 [2024-10-01 22:40:36.123316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.940 qpair failed and we were unable to recover it. 
00:41:40.940 [2024-10-01 22:40:36.123627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.940 [2024-10-01 22:40:36.123637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.940 qpair failed and we were unable to recover it. 00:41:40.940 [2024-10-01 22:40:36.123940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.940 [2024-10-01 22:40:36.123950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.940 qpair failed and we were unable to recover it. 00:41:40.940 [2024-10-01 22:40:36.124261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.940 [2024-10-01 22:40:36.124271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.940 qpair failed and we were unable to recover it. 00:41:40.940 [2024-10-01 22:40:36.124581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.940 [2024-10-01 22:40:36.124591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.940 qpair failed and we were unable to recover it. 00:41:40.940 [2024-10-01 22:40:36.124960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.940 [2024-10-01 22:40:36.124970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.940 qpair failed and we were unable to recover it. 00:41:40.940 [2024-10-01 22:40:36.125304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.940 [2024-10-01 22:40:36.125314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.940 qpair failed and we were unable to recover it. 00:41:40.940 [2024-10-01 22:40:36.125646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.940 [2024-10-01 22:40:36.125657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.940 qpair failed and we were unable to recover it. 00:41:40.940 [2024-10-01 22:40:36.125883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.940 [2024-10-01 22:40:36.125893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.940 qpair failed and we were unable to recover it. 00:41:40.940 [2024-10-01 22:40:36.126294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.940 [2024-10-01 22:40:36.126305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.940 qpair failed and we were unable to recover it. 00:41:40.940 [2024-10-01 22:40:36.126630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.940 [2024-10-01 22:40:36.126641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.940 qpair failed and we were unable to recover it. 
00:41:40.940 [2024-10-01 22:40:36.126754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.940 [2024-10-01 22:40:36.126765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.940 qpair failed and we were unable to recover it. 00:41:40.940 [2024-10-01 22:40:36.126932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.940 [2024-10-01 22:40:36.126942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.940 qpair failed and we were unable to recover it. 00:41:40.940 [2024-10-01 22:40:36.127256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.940 [2024-10-01 22:40:36.127266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.940 qpair failed and we were unable to recover it. 00:41:40.940 [2024-10-01 22:40:36.127559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.940 [2024-10-01 22:40:36.127570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.940 qpair failed and we were unable to recover it. 00:41:40.940 [2024-10-01 22:40:36.127758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.940 [2024-10-01 22:40:36.127770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.940 qpair failed and we were unable to recover it. 00:41:40.940 [2024-10-01 22:40:36.127958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.940 [2024-10-01 22:40:36.127972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.940 qpair failed and we were unable to recover it. 00:41:40.940 [2024-10-01 22:40:36.128145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.940 [2024-10-01 22:40:36.128156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.940 qpair failed and we were unable to recover it. 00:41:40.940 [2024-10-01 22:40:36.128471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.940 [2024-10-01 22:40:36.128481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.940 qpair failed and we were unable to recover it. 00:41:40.940 [2024-10-01 22:40:36.128790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.940 [2024-10-01 22:40:36.128800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.940 qpair failed and we were unable to recover it. 00:41:40.940 [2024-10-01 22:40:36.129155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.940 [2024-10-01 22:40:36.129165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.940 qpair failed and we were unable to recover it. 
00:41:40.940 [2024-10-01 22:40:36.129395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.940 [2024-10-01 22:40:36.129405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.940 qpair failed and we were unable to recover it. 00:41:40.940 [2024-10-01 22:40:36.129738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.940 [2024-10-01 22:40:36.129748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.940 qpair failed and we were unable to recover it. 00:41:40.940 [2024-10-01 22:40:36.130056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.940 [2024-10-01 22:40:36.130066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.940 qpair failed and we were unable to recover it. 00:41:40.940 [2024-10-01 22:40:36.130372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.940 [2024-10-01 22:40:36.130382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.940 qpair failed and we were unable to recover it. 00:41:40.940 [2024-10-01 22:40:36.130663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.940 [2024-10-01 22:40:36.130673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.941 qpair failed and we were unable to recover it. 00:41:40.941 [2024-10-01 22:40:36.130999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.941 [2024-10-01 22:40:36.131009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.941 qpair failed and we were unable to recover it. 00:41:40.941 [2024-10-01 22:40:36.131316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.941 [2024-10-01 22:40:36.131326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.941 qpair failed and we were unable to recover it. 00:41:40.941 [2024-10-01 22:40:36.131638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.941 [2024-10-01 22:40:36.131649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.941 qpair failed and we were unable to recover it. 00:41:40.941 [2024-10-01 22:40:36.131953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.941 [2024-10-01 22:40:36.131971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.941 qpair failed and we were unable to recover it. 00:41:40.941 [2024-10-01 22:40:36.132262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.941 [2024-10-01 22:40:36.132272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.941 qpair failed and we were unable to recover it. 
00:41:40.941 [2024-10-01 22:40:36.132570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.941 [2024-10-01 22:40:36.132581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.941 qpair failed and we were unable to recover it. 00:41:40.941 [2024-10-01 22:40:36.132969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.941 [2024-10-01 22:40:36.132980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.941 qpair failed and we were unable to recover it. 00:41:40.941 [2024-10-01 22:40:36.133292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.941 [2024-10-01 22:40:36.133302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.941 qpair failed and we were unable to recover it. 00:41:40.941 [2024-10-01 22:40:36.133619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.941 [2024-10-01 22:40:36.133635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.941 qpair failed and we were unable to recover it. 00:41:40.941 [2024-10-01 22:40:36.133921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.941 [2024-10-01 22:40:36.133932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.941 qpair failed and we were unable to recover it. 00:41:40.941 [2024-10-01 22:40:36.134253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.941 [2024-10-01 22:40:36.134264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.941 qpair failed and we were unable to recover it. 00:41:40.941 [2024-10-01 22:40:36.134451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.941 [2024-10-01 22:40:36.134461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.941 qpair failed and we were unable to recover it. 00:41:40.941 [2024-10-01 22:40:36.134839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.941 [2024-10-01 22:40:36.134851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.941 qpair failed and we were unable to recover it. 00:41:40.941 [2024-10-01 22:40:36.135120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.941 [2024-10-01 22:40:36.135131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.941 qpair failed and we were unable to recover it. 00:41:40.941 [2024-10-01 22:40:36.135346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.941 [2024-10-01 22:40:36.135356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.941 qpair failed and we were unable to recover it. 
00:41:40.941 [2024-10-01 22:40:36.135665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.941 [2024-10-01 22:40:36.135676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.941 qpair failed and we were unable to recover it. 00:41:40.941 [2024-10-01 22:40:36.135871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.941 [2024-10-01 22:40:36.135882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.941 qpair failed and we were unable to recover it. 00:41:40.941 [2024-10-01 22:40:36.136211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.941 [2024-10-01 22:40:36.136225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.941 qpair failed and we were unable to recover it. 00:41:40.941 [2024-10-01 22:40:36.136541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.941 [2024-10-01 22:40:36.136552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.941 qpair failed and we were unable to recover it. 00:41:40.941 [2024-10-01 22:40:36.136870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.941 [2024-10-01 22:40:36.136881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.941 qpair failed and we were unable to recover it. 00:41:40.941 [2024-10-01 22:40:36.137177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.941 [2024-10-01 22:40:36.137188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.941 qpair failed and we were unable to recover it. 00:41:40.941 [2024-10-01 22:40:36.137496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.941 [2024-10-01 22:40:36.137506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.941 qpair failed and we were unable to recover it. 00:41:40.941 [2024-10-01 22:40:36.137794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.941 [2024-10-01 22:40:36.137805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.941 qpair failed and we were unable to recover it. 00:41:40.941 [2024-10-01 22:40:36.138194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.941 [2024-10-01 22:40:36.138205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.941 qpair failed and we were unable to recover it. 00:41:40.941 [2024-10-01 22:40:36.138539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.941 [2024-10-01 22:40:36.138549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.941 qpair failed and we were unable to recover it. 
00:41:40.941 [2024-10-01 22:40:36.138864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.941 [2024-10-01 22:40:36.138876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.941 qpair failed and we were unable to recover it. 00:41:40.941 [2024-10-01 22:40:36.138987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.941 [2024-10-01 22:40:36.138996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.941 qpair failed and we were unable to recover it. 00:41:40.941 [2024-10-01 22:40:36.139299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.941 [2024-10-01 22:40:36.139310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.941 qpair failed and we were unable to recover it. 00:41:40.941 [2024-10-01 22:40:36.139610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.941 [2024-10-01 22:40:36.139621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.941 qpair failed and we were unable to recover it. 00:41:40.941 [2024-10-01 22:40:36.139944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.941 [2024-10-01 22:40:36.139956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.941 qpair failed and we were unable to recover it. 00:41:40.941 [2024-10-01 22:40:36.140158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.941 [2024-10-01 22:40:36.140169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.941 qpair failed and we were unable to recover it. 00:41:40.941 [2024-10-01 22:40:36.140482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.941 [2024-10-01 22:40:36.140493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.941 qpair failed and we were unable to recover it. 00:41:40.941 [2024-10-01 22:40:36.140849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.941 [2024-10-01 22:40:36.140861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.941 qpair failed and we were unable to recover it. 00:41:40.941 [2024-10-01 22:40:36.141162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.942 [2024-10-01 22:40:36.141172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.942 qpair failed and we were unable to recover it. 00:41:40.942 [2024-10-01 22:40:36.141352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.942 [2024-10-01 22:40:36.141363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.942 qpair failed and we were unable to recover it. 
00:41:40.942 [2024-10-01 22:40:36.141688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.942 [2024-10-01 22:40:36.141699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.942 qpair failed and we were unable to recover it. 00:41:40.942 [2024-10-01 22:40:36.141944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.942 [2024-10-01 22:40:36.141955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.942 qpair failed and we were unable to recover it. 00:41:40.942 [2024-10-01 22:40:36.142275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.942 [2024-10-01 22:40:36.142286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.942 qpair failed and we were unable to recover it. 00:41:40.942 [2024-10-01 22:40:36.142596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.942 [2024-10-01 22:40:36.142606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.942 qpair failed and we were unable to recover it. 00:41:40.942 [2024-10-01 22:40:36.142908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.942 [2024-10-01 22:40:36.142919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.942 qpair failed and we were unable to recover it. 00:41:40.942 [2024-10-01 22:40:36.143107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.942 [2024-10-01 22:40:36.143118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.942 qpair failed and we were unable to recover it. 00:41:40.942 [2024-10-01 22:40:36.143293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.942 [2024-10-01 22:40:36.143304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.942 qpair failed and we were unable to recover it. 00:41:40.942 [2024-10-01 22:40:36.143638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.942 [2024-10-01 22:40:36.143649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.942 qpair failed and we were unable to recover it. 00:41:40.942 [2024-10-01 22:40:36.143952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.942 [2024-10-01 22:40:36.143962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.942 qpair failed and we were unable to recover it. 00:41:40.942 [2024-10-01 22:40:36.144259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.942 [2024-10-01 22:40:36.144272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.942 qpair failed and we were unable to recover it. 
00:41:40.942 [2024-10-01 22:40:36.144658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.942 [2024-10-01 22:40:36.144669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.942 qpair failed and we were unable to recover it. 00:41:40.942 [2024-10-01 22:40:36.145006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.942 [2024-10-01 22:40:36.145018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.942 qpair failed and we were unable to recover it. 00:41:40.942 [2024-10-01 22:40:36.145329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.942 [2024-10-01 22:40:36.145340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.942 qpair failed and we were unable to recover it. 00:41:40.942 [2024-10-01 22:40:36.145615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.942 [2024-10-01 22:40:36.145633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.942 qpair failed and we were unable to recover it. 00:41:40.942 [2024-10-01 22:40:36.145820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.942 [2024-10-01 22:40:36.145829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:40.942 qpair failed and we were unable to recover it. 00:41:41.283 [2024-10-01 22:40:36.146109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.284 [2024-10-01 22:40:36.146120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.284 qpair failed and we were unable to recover it. 00:41:41.284 [2024-10-01 22:40:36.146427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.284 [2024-10-01 22:40:36.146444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.284 qpair failed and we were unable to recover it. 00:41:41.284 [2024-10-01 22:40:36.146751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.284 [2024-10-01 22:40:36.146761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.284 qpair failed and we were unable to recover it. 00:41:41.284 [2024-10-01 22:40:36.147152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.284 [2024-10-01 22:40:36.147162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.284 qpair failed and we were unable to recover it. 00:41:41.284 [2024-10-01 22:40:36.147365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.284 [2024-10-01 22:40:36.147375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.284 qpair failed and we were unable to recover it. 
00:41:41.284 [2024-10-01 22:40:36.147713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.284 [2024-10-01 22:40:36.147724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.284 qpair failed and we were unable to recover it. 00:41:41.284 [2024-10-01 22:40:36.148099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.284 [2024-10-01 22:40:36.148109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.284 qpair failed and we were unable to recover it. 00:41:41.284 [2024-10-01 22:40:36.148408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.284 [2024-10-01 22:40:36.148418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.284 qpair failed and we were unable to recover it. 00:41:41.284 [2024-10-01 22:40:36.148731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.284 [2024-10-01 22:40:36.148741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.284 qpair failed and we were unable to recover it. 00:41:41.284 [2024-10-01 22:40:36.149042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.284 [2024-10-01 22:40:36.149052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.284 qpair failed and we were unable to recover it. 00:41:41.284 [2024-10-01 22:40:36.149281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.284 [2024-10-01 22:40:36.149291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.284 qpair failed and we were unable to recover it. 00:41:41.284 [2024-10-01 22:40:36.149609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.284 [2024-10-01 22:40:36.149619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.284 qpair failed and we were unable to recover it. 00:41:41.284 [2024-10-01 22:40:36.150004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.284 [2024-10-01 22:40:36.150015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.284 qpair failed and we were unable to recover it. 00:41:41.284 [2024-10-01 22:40:36.150210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.284 [2024-10-01 22:40:36.150220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.284 qpair failed and we were unable to recover it. 00:41:41.284 [2024-10-01 22:40:36.150530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.284 [2024-10-01 22:40:36.150540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.284 qpair failed and we were unable to recover it. 
00:41:41.284 [2024-10-01 22:40:36.150753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.284 [2024-10-01 22:40:36.150763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.284 qpair failed and we were unable to recover it.
[the same three-message error sequence (posix.c:1055:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats roughly 200 more times, timestamps 2024-10-01 22:40:36.150958 through 22:40:36.214379]
00:41:41.290 [2024-10-01 22:40:36.214688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.290 [2024-10-01 22:40:36.214699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.290 qpair failed and we were unable to recover it. 00:41:41.290 [2024-10-01 22:40:36.215020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.290 [2024-10-01 22:40:36.215030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.290 qpair failed and we were unable to recover it. 00:41:41.290 [2024-10-01 22:40:36.215340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.290 [2024-10-01 22:40:36.215350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.290 qpair failed and we were unable to recover it. 00:41:41.290 [2024-10-01 22:40:36.215684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.290 [2024-10-01 22:40:36.215694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.290 qpair failed and we were unable to recover it. 00:41:41.290 [2024-10-01 22:40:36.216092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.290 [2024-10-01 22:40:36.216101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.290 qpair failed and we were unable to recover it. 00:41:41.290 [2024-10-01 22:40:36.216433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.290 [2024-10-01 22:40:36.216443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.290 qpair failed and we were unable to recover it. 00:41:41.290 [2024-10-01 22:40:36.216735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.290 [2024-10-01 22:40:36.216745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.290 qpair failed and we were unable to recover it. 00:41:41.290 [2024-10-01 22:40:36.217059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.290 [2024-10-01 22:40:36.217069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.290 qpair failed and we were unable to recover it. 00:41:41.290 [2024-10-01 22:40:36.217269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.290 [2024-10-01 22:40:36.217278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.290 qpair failed and we were unable to recover it. 00:41:41.290 [2024-10-01 22:40:36.217481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.290 [2024-10-01 22:40:36.217492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.290 qpair failed and we were unable to recover it. 
00:41:41.290 [2024-10-01 22:40:36.217675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.290 [2024-10-01 22:40:36.217686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.290 qpair failed and we were unable to recover it. 00:41:41.290 [2024-10-01 22:40:36.218044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.290 [2024-10-01 22:40:36.218054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.290 qpair failed and we were unable to recover it. 00:41:41.290 [2024-10-01 22:40:36.218353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.290 [2024-10-01 22:40:36.218363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.290 qpair failed and we were unable to recover it. 00:41:41.290 [2024-10-01 22:40:36.218655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.290 [2024-10-01 22:40:36.218665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.290 qpair failed and we were unable to recover it. 00:41:41.290 [2024-10-01 22:40:36.218956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.290 [2024-10-01 22:40:36.218966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.290 qpair failed and we were unable to recover it. 00:41:41.290 [2024-10-01 22:40:36.219287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.290 [2024-10-01 22:40:36.219297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.290 qpair failed and we were unable to recover it. 00:41:41.290 [2024-10-01 22:40:36.219641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.290 [2024-10-01 22:40:36.219652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.290 qpair failed and we were unable to recover it. 00:41:41.290 [2024-10-01 22:40:36.219956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.290 [2024-10-01 22:40:36.219966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.290 qpair failed and we were unable to recover it. 00:41:41.290 [2024-10-01 22:40:36.220280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.290 [2024-10-01 22:40:36.220291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.290 qpair failed and we were unable to recover it. 00:41:41.290 [2024-10-01 22:40:36.220594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.290 [2024-10-01 22:40:36.220604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.290 qpair failed and we were unable to recover it. 
00:41:41.290 [2024-10-01 22:40:36.220870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.290 [2024-10-01 22:40:36.220881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.290 qpair failed and we were unable to recover it. 00:41:41.290 [2024-10-01 22:40:36.221225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.290 [2024-10-01 22:40:36.221235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.290 qpair failed and we were unable to recover it. 00:41:41.290 [2024-10-01 22:40:36.221538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.290 [2024-10-01 22:40:36.221548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.290 qpair failed and we were unable to recover it. 00:41:41.290 [2024-10-01 22:40:36.221663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.290 [2024-10-01 22:40:36.221674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.290 qpair failed and we were unable to recover it. 00:41:41.290 [2024-10-01 22:40:36.221961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.290 [2024-10-01 22:40:36.221971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.290 qpair failed and we were unable to recover it. 00:41:41.290 [2024-10-01 22:40:36.222250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.290 [2024-10-01 22:40:36.222260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.290 qpair failed and we were unable to recover it. 00:41:41.290 [2024-10-01 22:40:36.222567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.290 [2024-10-01 22:40:36.222577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.290 qpair failed and we were unable to recover it. 00:41:41.290 [2024-10-01 22:40:36.222875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.290 [2024-10-01 22:40:36.222885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.290 qpair failed and we were unable to recover it. 00:41:41.290 [2024-10-01 22:40:36.223102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.290 [2024-10-01 22:40:36.223112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.290 qpair failed and we were unable to recover it. 00:41:41.290 [2024-10-01 22:40:36.223437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.290 [2024-10-01 22:40:36.223446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.290 qpair failed and we were unable to recover it. 
00:41:41.290 [2024-10-01 22:40:36.223643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.290 [2024-10-01 22:40:36.223653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.290 qpair failed and we were unable to recover it. 00:41:41.290 [2024-10-01 22:40:36.223997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.290 [2024-10-01 22:40:36.224006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.290 qpair failed and we were unable to recover it. 00:41:41.290 [2024-10-01 22:40:36.224318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.290 [2024-10-01 22:40:36.224328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.290 qpair failed and we were unable to recover it. 00:41:41.290 [2024-10-01 22:40:36.224665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.290 [2024-10-01 22:40:36.224676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.290 qpair failed and we were unable to recover it. 00:41:41.290 [2024-10-01 22:40:36.224987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.290 [2024-10-01 22:40:36.224996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.290 qpair failed and we were unable to recover it. 00:41:41.290 [2024-10-01 22:40:36.225303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.290 [2024-10-01 22:40:36.225313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.290 qpair failed and we were unable to recover it. 00:41:41.291 [2024-10-01 22:40:36.225623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.291 [2024-10-01 22:40:36.225638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.291 qpair failed and we were unable to recover it. 00:41:41.291 [2024-10-01 22:40:36.225948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.291 [2024-10-01 22:40:36.225958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.291 qpair failed and we were unable to recover it. 00:41:41.291 [2024-10-01 22:40:36.226265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.291 [2024-10-01 22:40:36.226275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.291 qpair failed and we were unable to recover it. 00:41:41.291 [2024-10-01 22:40:36.226577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.291 [2024-10-01 22:40:36.226587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.291 qpair failed and we were unable to recover it. 
00:41:41.291 [2024-10-01 22:40:36.226942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.291 [2024-10-01 22:40:36.226953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.291 qpair failed and we were unable to recover it. 00:41:41.291 [2024-10-01 22:40:36.227242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.291 [2024-10-01 22:40:36.227253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.291 qpair failed and we were unable to recover it. 00:41:41.291 [2024-10-01 22:40:36.227573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.291 [2024-10-01 22:40:36.227583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.291 qpair failed and we were unable to recover it. 00:41:41.291 [2024-10-01 22:40:36.227980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.291 [2024-10-01 22:40:36.227991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.291 qpair failed and we were unable to recover it. 00:41:41.291 [2024-10-01 22:40:36.228272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.291 [2024-10-01 22:40:36.228283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.291 qpair failed and we were unable to recover it. 00:41:41.291 [2024-10-01 22:40:36.228578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.291 [2024-10-01 22:40:36.228589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.291 qpair failed and we were unable to recover it. 00:41:41.291 [2024-10-01 22:40:36.228910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.291 [2024-10-01 22:40:36.228921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.291 qpair failed and we were unable to recover it. 00:41:41.291 [2024-10-01 22:40:36.229106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.291 [2024-10-01 22:40:36.229116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.291 qpair failed and we were unable to recover it. 00:41:41.291 [2024-10-01 22:40:36.229420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.291 [2024-10-01 22:40:36.229431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.291 qpair failed and we were unable to recover it. 00:41:41.291 [2024-10-01 22:40:36.229745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.291 [2024-10-01 22:40:36.229755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.291 qpair failed and we were unable to recover it. 
00:41:41.291 [2024-10-01 22:40:36.230015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.291 [2024-10-01 22:40:36.230025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.291 qpair failed and we were unable to recover it. 00:41:41.291 [2024-10-01 22:40:36.230186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.291 [2024-10-01 22:40:36.230197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.291 qpair failed and we were unable to recover it. 00:41:41.291 [2024-10-01 22:40:36.230383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.291 [2024-10-01 22:40:36.230394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.291 qpair failed and we were unable to recover it. 00:41:41.291 [2024-10-01 22:40:36.230646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.291 [2024-10-01 22:40:36.230656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.291 qpair failed and we were unable to recover it. 00:41:41.291 [2024-10-01 22:40:36.230956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.291 [2024-10-01 22:40:36.230968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.291 qpair failed and we were unable to recover it. 00:41:41.291 [2024-10-01 22:40:36.231292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.291 [2024-10-01 22:40:36.231302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.291 qpair failed and we were unable to recover it. 00:41:41.291 [2024-10-01 22:40:36.231600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.291 [2024-10-01 22:40:36.231609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.291 qpair failed and we were unable to recover it. 00:41:41.291 [2024-10-01 22:40:36.231953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.291 [2024-10-01 22:40:36.231964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.291 qpair failed and we were unable to recover it. 00:41:41.291 [2024-10-01 22:40:36.232271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.291 [2024-10-01 22:40:36.232280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.291 qpair failed and we were unable to recover it. 00:41:41.291 [2024-10-01 22:40:36.232593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.291 [2024-10-01 22:40:36.232602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.291 qpair failed and we were unable to recover it. 
00:41:41.291 [2024-10-01 22:40:36.232777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.291 [2024-10-01 22:40:36.232788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.291 qpair failed and we were unable to recover it. 00:41:41.291 [2024-10-01 22:40:36.233098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.291 [2024-10-01 22:40:36.233108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.291 qpair failed and we were unable to recover it. 00:41:41.291 [2024-10-01 22:40:36.233430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.291 [2024-10-01 22:40:36.233440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.291 qpair failed and we were unable to recover it. 00:41:41.291 [2024-10-01 22:40:36.233649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.291 [2024-10-01 22:40:36.233659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.291 qpair failed and we were unable to recover it. 00:41:41.291 [2024-10-01 22:40:36.233950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.291 [2024-10-01 22:40:36.233959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.291 qpair failed and we were unable to recover it. 00:41:41.291 [2024-10-01 22:40:36.234275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.291 [2024-10-01 22:40:36.234284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.291 qpair failed and we were unable to recover it. 00:41:41.291 [2024-10-01 22:40:36.234465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.291 [2024-10-01 22:40:36.234475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.291 qpair failed and we were unable to recover it. 00:41:41.291 [2024-10-01 22:40:36.234725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.291 [2024-10-01 22:40:36.234735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.291 qpair failed and we were unable to recover it. 00:41:41.291 [2024-10-01 22:40:36.235043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.291 [2024-10-01 22:40:36.235061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.291 qpair failed and we were unable to recover it. 00:41:41.291 [2024-10-01 22:40:36.235392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.291 [2024-10-01 22:40:36.235403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.291 qpair failed and we were unable to recover it. 
00:41:41.291 [2024-10-01 22:40:36.235772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.291 [2024-10-01 22:40:36.235783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.291 qpair failed and we were unable to recover it. 00:41:41.291 [2024-10-01 22:40:36.236080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.291 [2024-10-01 22:40:36.236090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.291 qpair failed and we were unable to recover it. 00:41:41.291 [2024-10-01 22:40:36.236389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.291 [2024-10-01 22:40:36.236400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.291 qpair failed and we were unable to recover it. 00:41:41.291 [2024-10-01 22:40:36.236726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.291 [2024-10-01 22:40:36.236736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.291 qpair failed and we were unable to recover it. 00:41:41.291 [2024-10-01 22:40:36.237055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.291 [2024-10-01 22:40:36.237066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.292 qpair failed and we were unable to recover it. 00:41:41.292 [2024-10-01 22:40:36.237400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.292 [2024-10-01 22:40:36.237410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.292 qpair failed and we were unable to recover it. 00:41:41.292 [2024-10-01 22:40:36.237699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.292 [2024-10-01 22:40:36.237709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.292 qpair failed and we were unable to recover it. 00:41:41.292 [2024-10-01 22:40:36.238061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.292 [2024-10-01 22:40:36.238070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.292 qpair failed and we were unable to recover it. 00:41:41.292 [2024-10-01 22:40:36.238233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.292 [2024-10-01 22:40:36.238243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.292 qpair failed and we were unable to recover it. 00:41:41.292 [2024-10-01 22:40:36.238568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.292 [2024-10-01 22:40:36.238578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.292 qpair failed and we were unable to recover it. 
00:41:41.292 [2024-10-01 22:40:36.238873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.292 [2024-10-01 22:40:36.238884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.292 qpair failed and we were unable to recover it. 00:41:41.292 [2024-10-01 22:40:36.239259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.292 [2024-10-01 22:40:36.239272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.292 qpair failed and we were unable to recover it. 00:41:41.292 [2024-10-01 22:40:36.239575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.292 [2024-10-01 22:40:36.239586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.292 qpair failed and we were unable to recover it. 00:41:41.292 [2024-10-01 22:40:36.239872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.292 [2024-10-01 22:40:36.239883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.292 qpair failed and we were unable to recover it. 00:41:41.292 [2024-10-01 22:40:36.240160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.292 [2024-10-01 22:40:36.240170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.292 qpair failed and we were unable to recover it. 00:41:41.292 [2024-10-01 22:40:36.240453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.292 [2024-10-01 22:40:36.240462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.292 qpair failed and we were unable to recover it. 00:41:41.292 [2024-10-01 22:40:36.240834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.292 [2024-10-01 22:40:36.240845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.292 qpair failed and we were unable to recover it. 00:41:41.292 [2024-10-01 22:40:36.241140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.292 [2024-10-01 22:40:36.241151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.292 qpair failed and we were unable to recover it. 00:41:41.292 [2024-10-01 22:40:36.241457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.292 [2024-10-01 22:40:36.241468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.292 qpair failed and we were unable to recover it. 00:41:41.292 [2024-10-01 22:40:36.241780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.292 [2024-10-01 22:40:36.241790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.292 qpair failed and we were unable to recover it. 
00:41:41.292 [2024-10-01 22:40:36.242095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.292 [2024-10-01 22:40:36.242105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.292 qpair failed and we were unable to recover it. 00:41:41.292 [2024-10-01 22:40:36.242401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.292 [2024-10-01 22:40:36.242410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.292 qpair failed and we were unable to recover it. 00:41:41.292 [2024-10-01 22:40:36.242689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.292 [2024-10-01 22:40:36.242700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.292 qpair failed and we were unable to recover it. 00:41:41.292 [2024-10-01 22:40:36.243016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.292 [2024-10-01 22:40:36.243026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.292 qpair failed and we were unable to recover it. 00:41:41.292 [2024-10-01 22:40:36.243315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.292 [2024-10-01 22:40:36.243325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.292 qpair failed and we were unable to recover it. 00:41:41.292 [2024-10-01 22:40:36.243638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.292 [2024-10-01 22:40:36.243649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.292 qpair failed and we were unable to recover it. 00:41:41.292 [2024-10-01 22:40:36.243968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.292 [2024-10-01 22:40:36.243979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.292 qpair failed and we were unable to recover it. 00:41:41.292 [2024-10-01 22:40:36.244163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.292 [2024-10-01 22:40:36.244174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.292 qpair failed and we were unable to recover it. 00:41:41.292 [2024-10-01 22:40:36.244493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.292 [2024-10-01 22:40:36.244503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.292 qpair failed and we were unable to recover it. 00:41:41.292 [2024-10-01 22:40:36.244860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.292 [2024-10-01 22:40:36.244870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.292 qpair failed and we were unable to recover it. 
00:41:41.292 [2024-10-01 22:40:36.245149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.292 [2024-10-01 22:40:36.245159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.292 qpair failed and we were unable to recover it. 00:41:41.292 [2024-10-01 22:40:36.245440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.292 [2024-10-01 22:40:36.245451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.292 qpair failed and we were unable to recover it. 00:41:41.292 [2024-10-01 22:40:36.245722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.292 [2024-10-01 22:40:36.245733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.292 qpair failed and we were unable to recover it. 00:41:41.292 [2024-10-01 22:40:36.246054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.292 [2024-10-01 22:40:36.246065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.292 qpair failed and we were unable to recover it. 00:41:41.292 [2024-10-01 22:40:36.246341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.292 [2024-10-01 22:40:36.246359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.292 qpair failed and we were unable to recover it. 00:41:41.292 [2024-10-01 22:40:36.246691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.292 [2024-10-01 22:40:36.246702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.292 qpair failed and we were unable to recover it. 00:41:41.292 [2024-10-01 22:40:36.246974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.292 [2024-10-01 22:40:36.246983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.292 qpair failed and we were unable to recover it. 00:41:41.292 [2024-10-01 22:40:36.247294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.292 [2024-10-01 22:40:36.247303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.292 qpair failed and we were unable to recover it. 00:41:41.292 [2024-10-01 22:40:36.247582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.292 [2024-10-01 22:40:36.247591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.292 qpair failed and we were unable to recover it. 00:41:41.292 [2024-10-01 22:40:36.247922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.292 [2024-10-01 22:40:36.247932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.292 qpair failed and we were unable to recover it. 
00:41:41.292 [2024-10-01 22:40:36.248238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.292 [2024-10-01 22:40:36.248248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.292 qpair failed and we were unable to recover it. 00:41:41.292 [2024-10-01 22:40:36.248566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.292 [2024-10-01 22:40:36.248576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.292 qpair failed and we were unable to recover it. 00:41:41.292 [2024-10-01 22:40:36.248867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.292 [2024-10-01 22:40:36.248878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.293 qpair failed and we were unable to recover it. 00:41:41.293 [2024-10-01 22:40:36.249158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.293 [2024-10-01 22:40:36.249167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.293 qpair failed and we were unable to recover it. 00:41:41.293 [2024-10-01 22:40:36.249481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.293 [2024-10-01 22:40:36.249491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.293 qpair failed and we were unable to recover it. 00:41:41.293 [2024-10-01 22:40:36.249800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.293 [2024-10-01 22:40:36.249810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.293 qpair failed and we were unable to recover it. 00:41:41.293 [2024-10-01 22:40:36.250080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.293 [2024-10-01 22:40:36.250090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.293 qpair failed and we were unable to recover it. 00:41:41.293 [2024-10-01 22:40:36.250384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.293 [2024-10-01 22:40:36.250393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.293 qpair failed and we were unable to recover it. 00:41:41.293 [2024-10-01 22:40:36.250728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.293 [2024-10-01 22:40:36.250739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.293 qpair failed and we were unable to recover it. 00:41:41.293 [2024-10-01 22:40:36.251061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.293 [2024-10-01 22:40:36.251071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.293 qpair failed and we were unable to recover it. 
00:41:41.293 [2024-10-01 22:40:36.251349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.293 [2024-10-01 22:40:36.251359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.293 qpair failed and we were unable to recover it. 00:41:41.293 [2024-10-01 22:40:36.251682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.293 [2024-10-01 22:40:36.251692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.293 qpair failed and we were unable to recover it. 00:41:41.293 [2024-10-01 22:40:36.252082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.293 [2024-10-01 22:40:36.252093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.293 qpair failed and we were unable to recover it. 00:41:41.293 [2024-10-01 22:40:36.252369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.293 [2024-10-01 22:40:36.252379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.293 qpair failed and we were unable to recover it. 00:41:41.293 [2024-10-01 22:40:36.252705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.293 [2024-10-01 22:40:36.252716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.293 qpair failed and we were unable to recover it. 00:41:41.293 [2024-10-01 22:40:36.253025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.293 [2024-10-01 22:40:36.253036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.293 qpair failed and we were unable to recover it. 00:41:41.293 [2024-10-01 22:40:36.253342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.293 [2024-10-01 22:40:36.253352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.293 qpair failed and we were unable to recover it. 00:41:41.293 [2024-10-01 22:40:36.253730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.293 [2024-10-01 22:40:36.253741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.293 qpair failed and we were unable to recover it. 00:41:41.293 [2024-10-01 22:40:36.254039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.293 [2024-10-01 22:40:36.254049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.293 qpair failed and we were unable to recover it. 00:41:41.293 [2024-10-01 22:40:36.254344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.293 [2024-10-01 22:40:36.254354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.293 qpair failed and we were unable to recover it. 
00:41:41.293 [2024-10-01 22:40:36.254663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:41.293 [2024-10-01 22:40:36.254673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420
00:41:41.293 qpair failed and we were unable to recover it.
00:41:41.293 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats without interruption for every reconnect attempt from 22:40:36.254924 through 22:40:36.318836; duplicate triples elided ...]
00:41:41.299 [2024-10-01 22:40:36.319134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.299 [2024-10-01 22:40:36.319144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.299 qpair failed and we were unable to recover it. 00:41:41.299 [2024-10-01 22:40:36.319421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.299 [2024-10-01 22:40:36.319431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.299 qpair failed and we were unable to recover it. 00:41:41.299 [2024-10-01 22:40:36.319743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.299 [2024-10-01 22:40:36.319754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.299 qpair failed and we were unable to recover it. 00:41:41.299 [2024-10-01 22:40:36.320057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.299 [2024-10-01 22:40:36.320067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.299 qpair failed and we were unable to recover it. 00:41:41.299 [2024-10-01 22:40:36.320371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.299 [2024-10-01 22:40:36.320381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.299 qpair failed and we were unable to recover it. 00:41:41.299 [2024-10-01 22:40:36.320659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.299 [2024-10-01 22:40:36.320669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.299 qpair failed and we were unable to recover it. 00:41:41.299 [2024-10-01 22:40:36.320868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.299 [2024-10-01 22:40:36.320879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.299 qpair failed and we were unable to recover it. 00:41:41.299 [2024-10-01 22:40:36.321194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.299 [2024-10-01 22:40:36.321204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.299 qpair failed and we were unable to recover it. 00:41:41.299 [2024-10-01 22:40:36.321506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.299 [2024-10-01 22:40:36.321516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.299 qpair failed and we were unable to recover it. 00:41:41.299 [2024-10-01 22:40:36.321789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.299 [2024-10-01 22:40:36.321799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.299 qpair failed and we were unable to recover it. 
00:41:41.299 [2024-10-01 22:40:36.322003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.299 [2024-10-01 22:40:36.322012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.299 qpair failed and we were unable to recover it. 00:41:41.299 [2024-10-01 22:40:36.322329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.299 [2024-10-01 22:40:36.322339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.299 qpair failed and we were unable to recover it. 00:41:41.299 [2024-10-01 22:40:36.322617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.299 [2024-10-01 22:40:36.322630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.299 qpair failed and we were unable to recover it. 00:41:41.299 [2024-10-01 22:40:36.322935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.299 [2024-10-01 22:40:36.322945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.299 qpair failed and we were unable to recover it. 00:41:41.299 [2024-10-01 22:40:36.323251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.299 [2024-10-01 22:40:36.323262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.299 qpair failed and we were unable to recover it. 00:41:41.299 [2024-10-01 22:40:36.323561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.299 [2024-10-01 22:40:36.323570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.299 qpair failed and we were unable to recover it. 00:41:41.299 [2024-10-01 22:40:36.323875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.299 [2024-10-01 22:40:36.323886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.299 qpair failed and we were unable to recover it. 00:41:41.299 [2024-10-01 22:40:36.324213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.299 [2024-10-01 22:40:36.324222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.299 qpair failed and we were unable to recover it. 00:41:41.299 [2024-10-01 22:40:36.324598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.299 [2024-10-01 22:40:36.324609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.299 qpair failed and we were unable to recover it. 00:41:41.299 [2024-10-01 22:40:36.324909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.299 [2024-10-01 22:40:36.324919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.299 qpair failed and we were unable to recover it. 
00:41:41.299 [2024-10-01 22:40:36.325208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.299 [2024-10-01 22:40:36.325219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.299 qpair failed and we were unable to recover it. 00:41:41.299 [2024-10-01 22:40:36.325510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.299 [2024-10-01 22:40:36.325520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.299 qpair failed and we were unable to recover it. 00:41:41.299 [2024-10-01 22:40:36.325796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.299 [2024-10-01 22:40:36.325807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.299 qpair failed and we were unable to recover it. 00:41:41.299 [2024-10-01 22:40:36.326125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.300 [2024-10-01 22:40:36.326134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.300 qpair failed and we were unable to recover it. 00:41:41.300 [2024-10-01 22:40:36.326421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.300 [2024-10-01 22:40:36.326430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.300 qpair failed and we were unable to recover it. 00:41:41.300 [2024-10-01 22:40:36.326744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.300 [2024-10-01 22:40:36.326755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.300 qpair failed and we were unable to recover it. 00:41:41.300 [2024-10-01 22:40:36.326944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.300 [2024-10-01 22:40:36.326956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.300 qpair failed and we were unable to recover it. 00:41:41.300 [2024-10-01 22:40:36.327222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.300 [2024-10-01 22:40:36.327232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.300 qpair failed and we were unable to recover it. 00:41:41.300 [2024-10-01 22:40:36.327529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.300 [2024-10-01 22:40:36.327539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.300 qpair failed and we were unable to recover it. 00:41:41.300 [2024-10-01 22:40:36.327841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.300 [2024-10-01 22:40:36.327851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.300 qpair failed and we were unable to recover it. 
00:41:41.300 [2024-10-01 22:40:36.328163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.300 [2024-10-01 22:40:36.328173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.300 qpair failed and we were unable to recover it. 00:41:41.300 [2024-10-01 22:40:36.328475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.300 [2024-10-01 22:40:36.328485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.300 qpair failed and we were unable to recover it. 00:41:41.300 [2024-10-01 22:40:36.328798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.300 [2024-10-01 22:40:36.328807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.300 qpair failed and we were unable to recover it. 00:41:41.300 [2024-10-01 22:40:36.329133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.300 [2024-10-01 22:40:36.329144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.300 qpair failed and we were unable to recover it. 00:41:41.300 [2024-10-01 22:40:36.329447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.300 [2024-10-01 22:40:36.329457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.300 qpair failed and we were unable to recover it. 00:41:41.300 [2024-10-01 22:40:36.329664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.300 [2024-10-01 22:40:36.329675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.300 qpair failed and we were unable to recover it. 00:41:41.300 [2024-10-01 22:40:36.329966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.300 [2024-10-01 22:40:36.329976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.300 qpair failed and we were unable to recover it. 00:41:41.300 [2024-10-01 22:40:36.330259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.300 [2024-10-01 22:40:36.330270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.300 qpair failed and we were unable to recover it. 00:41:41.300 [2024-10-01 22:40:36.330587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.300 [2024-10-01 22:40:36.330597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.300 qpair failed and we were unable to recover it. 00:41:41.300 [2024-10-01 22:40:36.330967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.300 [2024-10-01 22:40:36.330977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.300 qpair failed and we were unable to recover it. 
00:41:41.300 [2024-10-01 22:40:36.331245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.300 [2024-10-01 22:40:36.331255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.300 qpair failed and we were unable to recover it. 00:41:41.300 [2024-10-01 22:40:36.331567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.300 [2024-10-01 22:40:36.331577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.300 qpair failed and we were unable to recover it. 00:41:41.300 [2024-10-01 22:40:36.331765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.300 [2024-10-01 22:40:36.331776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.300 qpair failed and we were unable to recover it. 00:41:41.300 [2024-10-01 22:40:36.332133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.300 [2024-10-01 22:40:36.332143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.300 qpair failed and we were unable to recover it. 00:41:41.300 [2024-10-01 22:40:36.332342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.300 [2024-10-01 22:40:36.332352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.300 qpair failed and we were unable to recover it. 00:41:41.300 [2024-10-01 22:40:36.332664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.300 [2024-10-01 22:40:36.332674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.300 qpair failed and we were unable to recover it. 00:41:41.300 [2024-10-01 22:40:36.332983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.300 [2024-10-01 22:40:36.332994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.300 qpair failed and we were unable to recover it. 00:41:41.300 [2024-10-01 22:40:36.333298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.300 [2024-10-01 22:40:36.333308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.300 qpair failed and we were unable to recover it. 00:41:41.300 [2024-10-01 22:40:36.333609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.300 [2024-10-01 22:40:36.333619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.300 qpair failed and we were unable to recover it. 00:41:41.300 [2024-10-01 22:40:36.333939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.300 [2024-10-01 22:40:36.333950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.300 qpair failed and we were unable to recover it. 
00:41:41.300 [2024-10-01 22:40:36.334294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.300 [2024-10-01 22:40:36.334305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.300 qpair failed and we were unable to recover it. 00:41:41.300 [2024-10-01 22:40:36.334618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.300 [2024-10-01 22:40:36.334633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.300 qpair failed and we were unable to recover it. 00:41:41.300 [2024-10-01 22:40:36.334974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.300 [2024-10-01 22:40:36.334984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.300 qpair failed and we were unable to recover it. 00:41:41.300 [2024-10-01 22:40:36.335281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.300 [2024-10-01 22:40:36.335329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.300 qpair failed and we were unable to recover it. 00:41:41.300 [2024-10-01 22:40:36.335630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.300 [2024-10-01 22:40:36.335641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.300 qpair failed and we were unable to recover it. 00:41:41.300 [2024-10-01 22:40:36.335929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.300 [2024-10-01 22:40:36.335947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.300 qpair failed and we were unable to recover it. 00:41:41.300 [2024-10-01 22:40:36.336258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.300 [2024-10-01 22:40:36.336268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.300 qpair failed and we were unable to recover it. 00:41:41.300 [2024-10-01 22:40:36.336536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.300 [2024-10-01 22:40:36.336546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.300 qpair failed and we were unable to recover it. 00:41:41.300 [2024-10-01 22:40:36.336876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.300 [2024-10-01 22:40:36.336886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.300 qpair failed and we were unable to recover it. 00:41:41.300 [2024-10-01 22:40:36.337201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.300 [2024-10-01 22:40:36.337212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.300 qpair failed and we were unable to recover it. 
00:41:41.300 [2024-10-01 22:40:36.337514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.300 [2024-10-01 22:40:36.337524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.300 qpair failed and we were unable to recover it. 00:41:41.300 [2024-10-01 22:40:36.337841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.300 [2024-10-01 22:40:36.337851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.300 qpair failed and we were unable to recover it. 00:41:41.300 [2024-10-01 22:40:36.338149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.301 [2024-10-01 22:40:36.338158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.301 qpair failed and we were unable to recover it. 00:41:41.301 [2024-10-01 22:40:36.338436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.301 [2024-10-01 22:40:36.338446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.301 qpair failed and we were unable to recover it. 00:41:41.301 [2024-10-01 22:40:36.338751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.301 [2024-10-01 22:40:36.338762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.301 qpair failed and we were unable to recover it. 00:41:41.301 [2024-10-01 22:40:36.339122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.301 [2024-10-01 22:40:36.339133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.301 qpair failed and we were unable to recover it. 00:41:41.301 [2024-10-01 22:40:36.339432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.301 [2024-10-01 22:40:36.339442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.301 qpair failed and we were unable to recover it. 00:41:41.301 [2024-10-01 22:40:36.339614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.301 [2024-10-01 22:40:36.339631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.301 qpair failed and we were unable to recover it. 00:41:41.301 [2024-10-01 22:40:36.339998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.301 [2024-10-01 22:40:36.340009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.301 qpair failed and we were unable to recover it. 00:41:41.301 [2024-10-01 22:40:36.340288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.301 [2024-10-01 22:40:36.340299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.301 qpair failed and we were unable to recover it. 
00:41:41.301 [2024-10-01 22:40:36.340602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.301 [2024-10-01 22:40:36.340612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.301 qpair failed and we were unable to recover it. 00:41:41.301 [2024-10-01 22:40:36.340917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.301 [2024-10-01 22:40:36.340928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.301 qpair failed and we were unable to recover it. 00:41:41.301 [2024-10-01 22:40:36.341239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.301 [2024-10-01 22:40:36.341250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.301 qpair failed and we were unable to recover it. 00:41:41.301 [2024-10-01 22:40:36.341599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.301 [2024-10-01 22:40:36.341608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.301 qpair failed and we were unable to recover it. 00:41:41.301 [2024-10-01 22:40:36.341936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.301 [2024-10-01 22:40:36.341946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.301 qpair failed and we were unable to recover it. 00:41:41.301 [2024-10-01 22:40:36.342275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.301 [2024-10-01 22:40:36.342285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.301 qpair failed and we were unable to recover it. 00:41:41.301 [2024-10-01 22:40:36.342598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.301 [2024-10-01 22:40:36.342608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.301 qpair failed and we were unable to recover it. 00:41:41.301 [2024-10-01 22:40:36.342931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.301 [2024-10-01 22:40:36.342941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.301 qpair failed and we were unable to recover it. 00:41:41.301 [2024-10-01 22:40:36.343215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.301 [2024-10-01 22:40:36.343225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.301 qpair failed and we were unable to recover it. 00:41:41.301 [2024-10-01 22:40:36.343549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.301 [2024-10-01 22:40:36.343559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.301 qpair failed and we were unable to recover it. 
00:41:41.301 [2024-10-01 22:40:36.343860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.301 [2024-10-01 22:40:36.343873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.301 qpair failed and we were unable to recover it. 00:41:41.301 [2024-10-01 22:40:36.344153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.301 [2024-10-01 22:40:36.344163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.301 qpair failed and we were unable to recover it. 00:41:41.301 [2024-10-01 22:40:36.344486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.301 [2024-10-01 22:40:36.344496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.301 qpair failed and we were unable to recover it. 00:41:41.301 [2024-10-01 22:40:36.344782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.301 [2024-10-01 22:40:36.344792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.301 qpair failed and we were unable to recover it. 00:41:41.301 [2024-10-01 22:40:36.344972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.301 [2024-10-01 22:40:36.344983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.301 qpair failed and we were unable to recover it. 00:41:41.301 [2024-10-01 22:40:36.345316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.301 [2024-10-01 22:40:36.345326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.301 qpair failed and we were unable to recover it. 00:41:41.301 [2024-10-01 22:40:36.345642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.301 [2024-10-01 22:40:36.345652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.301 qpair failed and we were unable to recover it. 00:41:41.301 [2024-10-01 22:40:36.345964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.301 [2024-10-01 22:40:36.345973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.301 qpair failed and we were unable to recover it. 00:41:41.301 [2024-10-01 22:40:36.346278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.301 [2024-10-01 22:40:36.346289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.301 qpair failed and we were unable to recover it. 00:41:41.301 [2024-10-01 22:40:36.346475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.301 [2024-10-01 22:40:36.346485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.301 qpair failed and we were unable to recover it. 
00:41:41.301 [2024-10-01 22:40:36.346793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.301 [2024-10-01 22:40:36.346803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.301 qpair failed and we were unable to recover it. 00:41:41.301 [2024-10-01 22:40:36.347089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.301 [2024-10-01 22:40:36.347100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.301 qpair failed and we were unable to recover it. 00:41:41.301 [2024-10-01 22:40:36.347402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.301 [2024-10-01 22:40:36.347412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.301 qpair failed and we were unable to recover it. 00:41:41.301 [2024-10-01 22:40:36.347703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.301 [2024-10-01 22:40:36.347713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.301 qpair failed and we were unable to recover it. 00:41:41.301 [2024-10-01 22:40:36.347985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.301 [2024-10-01 22:40:36.347996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.301 qpair failed and we were unable to recover it. 00:41:41.301 [2024-10-01 22:40:36.348313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.301 [2024-10-01 22:40:36.348323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.301 qpair failed and we were unable to recover it. 00:41:41.301 [2024-10-01 22:40:36.348639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.301 [2024-10-01 22:40:36.348649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.301 qpair failed and we were unable to recover it. 00:41:41.301 [2024-10-01 22:40:36.348941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.301 [2024-10-01 22:40:36.348960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.301 qpair failed and we were unable to recover it. 00:41:41.301 [2024-10-01 22:40:36.349266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.301 [2024-10-01 22:40:36.349276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.301 qpair failed and we were unable to recover it. 00:41:41.301 [2024-10-01 22:40:36.349567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.301 [2024-10-01 22:40:36.349577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.301 qpair failed and we were unable to recover it. 
00:41:41.301 [2024-10-01 22:40:36.349799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.301 [2024-10-01 22:40:36.349809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.301 qpair failed and we were unable to recover it. 00:41:41.301 [2024-10-01 22:40:36.350189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.302 [2024-10-01 22:40:36.350199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.302 qpair failed and we were unable to recover it. 00:41:41.302 [2024-10-01 22:40:36.350387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.302 [2024-10-01 22:40:36.350397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.302 qpair failed and we were unable to recover it. 00:41:41.302 [2024-10-01 22:40:36.350711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.302 [2024-10-01 22:40:36.350722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.302 qpair failed and we were unable to recover it. 00:41:41.302 [2024-10-01 22:40:36.351030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.302 [2024-10-01 22:40:36.351040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.302 qpair failed and we were unable to recover it. 00:41:41.302 [2024-10-01 22:40:36.351322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.302 [2024-10-01 22:40:36.351332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.302 qpair failed and we were unable to recover it. 00:41:41.302 [2024-10-01 22:40:36.351651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.302 [2024-10-01 22:40:36.351661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.302 qpair failed and we were unable to recover it. 00:41:41.302 [2024-10-01 22:40:36.351965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.302 [2024-10-01 22:40:36.351974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.302 qpair failed and we were unable to recover it. 00:41:41.302 [2024-10-01 22:40:36.352283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.302 [2024-10-01 22:40:36.352293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.302 qpair failed and we were unable to recover it. 00:41:41.302 [2024-10-01 22:40:36.352612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.302 [2024-10-01 22:40:36.352621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.302 qpair failed and we were unable to recover it. 
00:41:41.302 [2024-10-01 22:40:36.352917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.302 [2024-10-01 22:40:36.352927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.302 qpair failed and we were unable to recover it. 00:41:41.302 [2024-10-01 22:40:36.353241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.302 [2024-10-01 22:40:36.353252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.302 qpair failed and we were unable to recover it. 00:41:41.302 [2024-10-01 22:40:36.353559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.302 [2024-10-01 22:40:36.353569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.302 qpair failed and we were unable to recover it. 00:41:41.302 [2024-10-01 22:40:36.353768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.302 [2024-10-01 22:40:36.353778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.302 qpair failed and we were unable to recover it. 00:41:41.302 [2024-10-01 22:40:36.354115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.302 [2024-10-01 22:40:36.354125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.302 qpair failed and we were unable to recover it. 00:41:41.302 [2024-10-01 22:40:36.354400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.302 [2024-10-01 22:40:36.354409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.302 qpair failed and we were unable to recover it. 00:41:41.302 [2024-10-01 22:40:36.354707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.302 [2024-10-01 22:40:36.354717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.302 qpair failed and we were unable to recover it. 00:41:41.302 [2024-10-01 22:40:36.355050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.302 [2024-10-01 22:40:36.355059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.302 qpair failed and we were unable to recover it. 00:41:41.302 [2024-10-01 22:40:36.355363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.302 [2024-10-01 22:40:36.355372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.302 qpair failed and we were unable to recover it. 00:41:41.302 [2024-10-01 22:40:36.355678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.302 [2024-10-01 22:40:36.355689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.302 qpair failed and we were unable to recover it. 
00:41:41.302 [2024-10-01 22:40:36.356043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.302 [2024-10-01 22:40:36.356053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.302 qpair failed and we were unable to recover it. 00:41:41.302 [2024-10-01 22:40:36.356255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.302 [2024-10-01 22:40:36.356265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.302 qpair failed and we were unable to recover it. 00:41:41.302 [2024-10-01 22:40:36.356610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.302 [2024-10-01 22:40:36.356619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.302 qpair failed and we were unable to recover it. 00:41:41.302 [2024-10-01 22:40:36.356940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.302 [2024-10-01 22:40:36.356950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.302 qpair failed and we were unable to recover it. 00:41:41.302 [2024-10-01 22:40:36.357265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.302 [2024-10-01 22:40:36.357275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.302 qpair failed and we were unable to recover it. 00:41:41.302 [2024-10-01 22:40:36.357628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.302 [2024-10-01 22:40:36.357638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.302 qpair failed and we were unable to recover it. 00:41:41.302 [2024-10-01 22:40:36.357913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.302 [2024-10-01 22:40:36.357923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.302 qpair failed and we were unable to recover it. 00:41:41.302 [2024-10-01 22:40:36.358303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.302 [2024-10-01 22:40:36.358313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.302 qpair failed and we were unable to recover it. 00:41:41.302 [2024-10-01 22:40:36.358622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.302 [2024-10-01 22:40:36.358638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.302 qpair failed and we were unable to recover it. 00:41:41.302 [2024-10-01 22:40:36.358932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.302 [2024-10-01 22:40:36.358941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.302 qpair failed and we were unable to recover it. 
00:41:41.302 [2024-10-01 22:40:36.359221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:41.302 [2024-10-01 22:40:36.359238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420
00:41:41.302 qpair failed and we were unable to recover it.
00:41:41.302 [... the same three-line error repeats verbatim (timestamps aside) for roughly 200 further reconnect attempts between 22:40:36.359576 and 22:40:36.421813: every connect() to 10.0.0.2, port 4420 for tqpair=0x130a180 fails with errno = 111 and the qpair is not recovered ...]
00:41:41.308 [2024-10-01 22:40:36.422021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:41.308 [2024-10-01 22:40:36.422031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420
00:41:41.308 qpair failed and we were unable to recover it.
00:41:41.308 [2024-10-01 22:40:36.422364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.308 [2024-10-01 22:40:36.422374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.308 qpair failed and we were unable to recover it. 00:41:41.308 [2024-10-01 22:40:36.422693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.308 [2024-10-01 22:40:36.422703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.308 qpair failed and we were unable to recover it. 00:41:41.308 [2024-10-01 22:40:36.423036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.308 [2024-10-01 22:40:36.423046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.308 qpair failed and we were unable to recover it. 00:41:41.308 [2024-10-01 22:40:36.423344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.308 [2024-10-01 22:40:36.423359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.308 qpair failed and we were unable to recover it. 00:41:41.308 [2024-10-01 22:40:36.423656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.308 [2024-10-01 22:40:36.423667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.308 qpair failed and we were unable to recover it. 00:41:41.308 [2024-10-01 22:40:36.423949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.308 [2024-10-01 22:40:36.423959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.308 qpair failed and we were unable to recover it. 00:41:41.308 [2024-10-01 22:40:36.424254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.308 [2024-10-01 22:40:36.424265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.308 qpair failed and we were unable to recover it. 00:41:41.308 [2024-10-01 22:40:36.424548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.308 [2024-10-01 22:40:36.424564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.308 qpair failed and we were unable to recover it. 00:41:41.308 [2024-10-01 22:40:36.424892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.308 [2024-10-01 22:40:36.424902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.308 qpair failed and we were unable to recover it. 00:41:41.308 [2024-10-01 22:40:36.425066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.308 [2024-10-01 22:40:36.425076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.308 qpair failed and we were unable to recover it. 
00:41:41.308 [2024-10-01 22:40:36.425252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.308 [2024-10-01 22:40:36.425262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.308 qpair failed and we were unable to recover it. 00:41:41.308 [2024-10-01 22:40:36.425562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.308 [2024-10-01 22:40:36.425572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.308 qpair failed and we were unable to recover it. 00:41:41.308 [2024-10-01 22:40:36.425924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.308 [2024-10-01 22:40:36.425935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.308 qpair failed and we were unable to recover it. 00:41:41.308 [2024-10-01 22:40:36.426250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.308 [2024-10-01 22:40:36.426261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.308 qpair failed and we were unable to recover it. 00:41:41.308 [2024-10-01 22:40:36.426541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.308 [2024-10-01 22:40:36.426551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.308 qpair failed and we were unable to recover it. 00:41:41.308 [2024-10-01 22:40:36.426853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.308 [2024-10-01 22:40:36.426863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.308 qpair failed and we were unable to recover it. 00:41:41.308 [2024-10-01 22:40:36.427159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.308 [2024-10-01 22:40:36.427169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.308 qpair failed and we were unable to recover it. 00:41:41.308 [2024-10-01 22:40:36.427478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.308 [2024-10-01 22:40:36.427489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.308 qpair failed and we were unable to recover it. 00:41:41.308 [2024-10-01 22:40:36.427786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.308 [2024-10-01 22:40:36.427796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.308 qpair failed and we were unable to recover it. 00:41:41.308 [2024-10-01 22:40:36.428152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.308 [2024-10-01 22:40:36.428161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.308 qpair failed and we were unable to recover it. 
00:41:41.308 [2024-10-01 22:40:36.428474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.308 [2024-10-01 22:40:36.428484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.308 qpair failed and we were unable to recover it. 00:41:41.308 [2024-10-01 22:40:36.428685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.308 [2024-10-01 22:40:36.428696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.308 qpair failed and we were unable to recover it. 00:41:41.308 [2024-10-01 22:40:36.428904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.308 [2024-10-01 22:40:36.428914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.308 qpair failed and we were unable to recover it. 00:41:41.308 [2024-10-01 22:40:36.429088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.308 [2024-10-01 22:40:36.429098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.308 qpair failed and we were unable to recover it. 00:41:41.308 [2024-10-01 22:40:36.429446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.308 [2024-10-01 22:40:36.429456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.308 qpair failed and we were unable to recover it. 00:41:41.309 [2024-10-01 22:40:36.429780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.309 [2024-10-01 22:40:36.429793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.309 qpair failed and we were unable to recover it. 00:41:41.309 [2024-10-01 22:40:36.430113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.309 [2024-10-01 22:40:36.430123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.309 qpair failed and we were unable to recover it. 00:41:41.309 [2024-10-01 22:40:36.430429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.309 [2024-10-01 22:40:36.430439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.309 qpair failed and we were unable to recover it. 00:41:41.309 [2024-10-01 22:40:36.430710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.309 [2024-10-01 22:40:36.430720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.309 qpair failed and we were unable to recover it. 00:41:41.309 [2024-10-01 22:40:36.431102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.309 [2024-10-01 22:40:36.431113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.309 qpair failed and we were unable to recover it. 
00:41:41.309 [2024-10-01 22:40:36.431406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.309 [2024-10-01 22:40:36.431417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.309 qpair failed and we were unable to recover it. 00:41:41.309 [2024-10-01 22:40:36.431589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.309 [2024-10-01 22:40:36.431600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.309 qpair failed and we were unable to recover it. 00:41:41.309 [2024-10-01 22:40:36.431782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.309 [2024-10-01 22:40:36.431792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.309 qpair failed and we were unable to recover it. 00:41:41.309 [2024-10-01 22:40:36.432097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.309 [2024-10-01 22:40:36.432107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.309 qpair failed and we were unable to recover it. 00:41:41.309 [2024-10-01 22:40:36.432478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.309 [2024-10-01 22:40:36.432488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.309 qpair failed and we were unable to recover it. 00:41:41.309 [2024-10-01 22:40:36.432652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.309 [2024-10-01 22:40:36.432662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.309 qpair failed and we were unable to recover it. 00:41:41.309 [2024-10-01 22:40:36.432975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.309 [2024-10-01 22:40:36.432986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.309 qpair failed and we were unable to recover it. 00:41:41.309 [2024-10-01 22:40:36.433163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.309 [2024-10-01 22:40:36.433175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.309 qpair failed and we were unable to recover it. 00:41:41.309 [2024-10-01 22:40:36.433575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.309 [2024-10-01 22:40:36.433585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.309 qpair failed and we were unable to recover it. 00:41:41.309 [2024-10-01 22:40:36.433921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.309 [2024-10-01 22:40:36.433931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.309 qpair failed and we were unable to recover it. 
00:41:41.309 [2024-10-01 22:40:36.434212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.309 [2024-10-01 22:40:36.434222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.309 qpair failed and we were unable to recover it. 00:41:41.309 [2024-10-01 22:40:36.434405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.309 [2024-10-01 22:40:36.434414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.309 qpair failed and we were unable to recover it. 00:41:41.309 [2024-10-01 22:40:36.434701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.309 [2024-10-01 22:40:36.434711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.309 qpair failed and we were unable to recover it. 00:41:41.309 [2024-10-01 22:40:36.435006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.309 [2024-10-01 22:40:36.435017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.309 qpair failed and we were unable to recover it. 00:41:41.309 [2024-10-01 22:40:36.435323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.309 [2024-10-01 22:40:36.435333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.309 qpair failed and we were unable to recover it. 00:41:41.309 [2024-10-01 22:40:36.435500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.309 [2024-10-01 22:40:36.435510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.309 qpair failed and we were unable to recover it. 00:41:41.309 [2024-10-01 22:40:36.435778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.309 [2024-10-01 22:40:36.435788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.309 qpair failed and we were unable to recover it. 00:41:41.309 [2024-10-01 22:40:36.436135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.309 [2024-10-01 22:40:36.436145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.309 qpair failed and we were unable to recover it. 00:41:41.309 [2024-10-01 22:40:36.436451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.309 [2024-10-01 22:40:36.436461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.309 qpair failed and we were unable to recover it. 00:41:41.309 [2024-10-01 22:40:36.436756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.309 [2024-10-01 22:40:36.436766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.309 qpair failed and we were unable to recover it. 
00:41:41.309 [2024-10-01 22:40:36.437073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.309 [2024-10-01 22:40:36.437082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.309 qpair failed and we were unable to recover it. 00:41:41.309 [2024-10-01 22:40:36.437377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.309 [2024-10-01 22:40:36.437387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.309 qpair failed and we were unable to recover it. 00:41:41.309 [2024-10-01 22:40:36.437685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.309 [2024-10-01 22:40:36.437704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.309 qpair failed and we were unable to recover it. 00:41:41.309 [2024-10-01 22:40:36.438022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.309 [2024-10-01 22:40:36.438032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.309 qpair failed and we were unable to recover it. 00:41:41.309 [2024-10-01 22:40:36.438423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.309 [2024-10-01 22:40:36.438433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.309 qpair failed and we were unable to recover it. 00:41:41.309 [2024-10-01 22:40:36.438812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.309 [2024-10-01 22:40:36.438823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.309 qpair failed and we were unable to recover it. 00:41:41.309 [2024-10-01 22:40:36.439031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.309 [2024-10-01 22:40:36.439041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.309 qpair failed and we were unable to recover it. 00:41:41.309 [2024-10-01 22:40:36.439355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.309 [2024-10-01 22:40:36.439365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.309 qpair failed and we were unable to recover it. 00:41:41.309 [2024-10-01 22:40:36.439665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.309 [2024-10-01 22:40:36.439676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.309 qpair failed and we were unable to recover it. 00:41:41.309 [2024-10-01 22:40:36.439971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.309 [2024-10-01 22:40:36.439982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.309 qpair failed and we were unable to recover it. 
00:41:41.309 [2024-10-01 22:40:36.440225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.309 [2024-10-01 22:40:36.440235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.309 qpair failed and we were unable to recover it. 00:41:41.309 [2024-10-01 22:40:36.440544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.309 [2024-10-01 22:40:36.440554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.309 qpair failed and we were unable to recover it. 00:41:41.309 [2024-10-01 22:40:36.440741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.309 [2024-10-01 22:40:36.440751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.309 qpair failed and we were unable to recover it. 00:41:41.309 [2024-10-01 22:40:36.441051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.309 [2024-10-01 22:40:36.441061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.310 qpair failed and we were unable to recover it. 00:41:41.310 [2024-10-01 22:40:36.441369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.310 [2024-10-01 22:40:36.441378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.310 qpair failed and we were unable to recover it. 00:41:41.310 [2024-10-01 22:40:36.441690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.310 [2024-10-01 22:40:36.441700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.310 qpair failed and we were unable to recover it. 00:41:41.310 [2024-10-01 22:40:36.441881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.310 [2024-10-01 22:40:36.441891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.310 qpair failed and we were unable to recover it. 00:41:41.310 [2024-10-01 22:40:36.442189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.310 [2024-10-01 22:40:36.442199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.310 qpair failed and we were unable to recover it. 00:41:41.310 [2024-10-01 22:40:36.442401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.310 [2024-10-01 22:40:36.442410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.310 qpair failed and we were unable to recover it. 00:41:41.310 [2024-10-01 22:40:36.442608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.310 [2024-10-01 22:40:36.442618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.310 qpair failed and we were unable to recover it. 
00:41:41.310 [2024-10-01 22:40:36.442922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.310 [2024-10-01 22:40:36.442932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.310 qpair failed and we were unable to recover it. 00:41:41.310 [2024-10-01 22:40:36.443344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.310 [2024-10-01 22:40:36.443354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.310 qpair failed and we were unable to recover it. 00:41:41.310 [2024-10-01 22:40:36.443655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.310 [2024-10-01 22:40:36.443666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.310 qpair failed and we were unable to recover it. 00:41:41.310 [2024-10-01 22:40:36.443987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.310 [2024-10-01 22:40:36.443997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.310 qpair failed and we were unable to recover it. 00:41:41.310 [2024-10-01 22:40:36.444306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.310 [2024-10-01 22:40:36.444316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.310 qpair failed and we were unable to recover it. 00:41:41.310 [2024-10-01 22:40:36.444609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.310 [2024-10-01 22:40:36.444619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.310 qpair failed and we were unable to recover it. 00:41:41.310 [2024-10-01 22:40:36.444904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.310 [2024-10-01 22:40:36.444914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.310 qpair failed and we were unable to recover it. 00:41:41.310 [2024-10-01 22:40:36.445194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.310 [2024-10-01 22:40:36.445204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.310 qpair failed and we were unable to recover it. 00:41:41.310 [2024-10-01 22:40:36.445487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.310 [2024-10-01 22:40:36.445497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.310 qpair failed and we were unable to recover it. 00:41:41.310 [2024-10-01 22:40:36.445737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.310 [2024-10-01 22:40:36.445747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.310 qpair failed and we were unable to recover it. 
00:41:41.310 [2024-10-01 22:40:36.446054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.310 [2024-10-01 22:40:36.446102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.310 qpair failed and we were unable to recover it. 00:41:41.310 [2024-10-01 22:40:36.446400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.310 [2024-10-01 22:40:36.446410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.310 qpair failed and we were unable to recover it. 00:41:41.310 [2024-10-01 22:40:36.446694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.310 [2024-10-01 22:40:36.446705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.310 qpair failed and we were unable to recover it. 00:41:41.310 [2024-10-01 22:40:36.447125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.310 [2024-10-01 22:40:36.447136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.310 qpair failed and we were unable to recover it. 00:41:41.310 [2024-10-01 22:40:36.447455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.310 [2024-10-01 22:40:36.447465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.310 qpair failed and we were unable to recover it. 00:41:41.310 [2024-10-01 22:40:36.447771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.310 [2024-10-01 22:40:36.447781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.310 qpair failed and we were unable to recover it. 00:41:41.310 [2024-10-01 22:40:36.448159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.310 [2024-10-01 22:40:36.448168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.310 qpair failed and we were unable to recover it. 00:41:41.310 [2024-10-01 22:40:36.448458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.310 [2024-10-01 22:40:36.448469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.310 qpair failed and we were unable to recover it. 00:41:41.310 [2024-10-01 22:40:36.448641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.310 [2024-10-01 22:40:36.448651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.310 qpair failed and we were unable to recover it. 00:41:41.310 [2024-10-01 22:40:36.448988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.310 [2024-10-01 22:40:36.448998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.310 qpair failed and we were unable to recover it. 
00:41:41.310 [2024-10-01 22:40:36.449307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.310 [2024-10-01 22:40:36.449317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.310 qpair failed and we were unable to recover it. 00:41:41.310 [2024-10-01 22:40:36.449602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.310 [2024-10-01 22:40:36.449612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.310 qpair failed and we were unable to recover it. 00:41:41.310 [2024-10-01 22:40:36.449920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.310 [2024-10-01 22:40:36.449930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.310 qpair failed and we were unable to recover it. 00:41:41.310 [2024-10-01 22:40:36.450234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.310 [2024-10-01 22:40:36.450244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.310 qpair failed and we were unable to recover it. 00:41:41.310 [2024-10-01 22:40:36.450521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.310 [2024-10-01 22:40:36.450531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.310 qpair failed and we were unable to recover it. 00:41:41.310 [2024-10-01 22:40:36.450892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.310 [2024-10-01 22:40:36.450902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.310 qpair failed and we were unable to recover it. 00:41:41.310 [2024-10-01 22:40:36.451307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.310 [2024-10-01 22:40:36.451318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.310 qpair failed and we were unable to recover it. 00:41:41.310 [2024-10-01 22:40:36.451640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.310 [2024-10-01 22:40:36.451652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.310 qpair failed and we were unable to recover it. 00:41:41.310 [2024-10-01 22:40:36.451935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.310 [2024-10-01 22:40:36.451945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.310 qpair failed and we were unable to recover it. 00:41:41.310 [2024-10-01 22:40:36.452232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.310 [2024-10-01 22:40:36.452242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.310 qpair failed and we were unable to recover it. 
00:41:41.310 [2024-10-01 22:40:36.452451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.310 [2024-10-01 22:40:36.452462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.310 qpair failed and we were unable to recover it. 00:41:41.310 [2024-10-01 22:40:36.452849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.310 [2024-10-01 22:40:36.452859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.310 qpair failed and we were unable to recover it. 00:41:41.310 [2024-10-01 22:40:36.453137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.310 [2024-10-01 22:40:36.453146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.311 qpair failed and we were unable to recover it. 00:41:41.311 [2024-10-01 22:40:36.453427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.311 [2024-10-01 22:40:36.453437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.311 qpair failed and we were unable to recover it. 00:41:41.311 [2024-10-01 22:40:36.453815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.311 [2024-10-01 22:40:36.453826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.311 qpair failed and we were unable to recover it. 00:41:41.311 [2024-10-01 22:40:36.454126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.311 [2024-10-01 22:40:36.454136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.311 qpair failed and we were unable to recover it. 00:41:41.311 [2024-10-01 22:40:36.454413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.311 [2024-10-01 22:40:36.454422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.311 qpair failed and we were unable to recover it. 00:41:41.311 [2024-10-01 22:40:36.454711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.311 [2024-10-01 22:40:36.454721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.311 qpair failed and we were unable to recover it. 00:41:41.311 [2024-10-01 22:40:36.455037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.311 [2024-10-01 22:40:36.455047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.311 qpair failed and we were unable to recover it. 00:41:41.311 [2024-10-01 22:40:36.455342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.311 [2024-10-01 22:40:36.455352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.311 qpair failed and we were unable to recover it. 
00:41:41.311 [2024-10-01 22:40:36.455552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.311 [2024-10-01 22:40:36.455562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.311 qpair failed and we were unable to recover it. 00:41:41.311 [2024-10-01 22:40:36.455870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.311 [2024-10-01 22:40:36.455881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.311 qpair failed and we were unable to recover it. 00:41:41.311 [2024-10-01 22:40:36.456064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.311 [2024-10-01 22:40:36.456074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.311 qpair failed and we were unable to recover it. 00:41:41.311 [2024-10-01 22:40:36.456440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.311 [2024-10-01 22:40:36.456450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.311 qpair failed and we were unable to recover it. 00:41:41.311 [2024-10-01 22:40:36.456731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.311 [2024-10-01 22:40:36.456741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.311 qpair failed and we were unable to recover it. 00:41:41.311 [2024-10-01 22:40:36.457032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.311 [2024-10-01 22:40:36.457042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.311 qpair failed and we were unable to recover it. 00:41:41.311 [2024-10-01 22:40:36.457238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.311 [2024-10-01 22:40:36.457248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.311 qpair failed and we were unable to recover it. 00:41:41.311 [2024-10-01 22:40:36.457568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.311 [2024-10-01 22:40:36.457578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.311 qpair failed and we were unable to recover it. 00:41:41.311 [2024-10-01 22:40:36.457891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.311 [2024-10-01 22:40:36.457901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.311 qpair failed and we were unable to recover it. 00:41:41.311 [2024-10-01 22:40:36.458225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.311 [2024-10-01 22:40:36.458234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.311 qpair failed and we were unable to recover it. 
00:41:41.311 [2024-10-01 22:40:36.458500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.311 [2024-10-01 22:40:36.458511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.311 qpair failed and we were unable to recover it. 00:41:41.311 [2024-10-01 22:40:36.458798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.311 [2024-10-01 22:40:36.458809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.311 qpair failed and we were unable to recover it. 00:41:41.311 [2024-10-01 22:40:36.459139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.311 [2024-10-01 22:40:36.459150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.311 qpair failed and we were unable to recover it. 00:41:41.311 [2024-10-01 22:40:36.459377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.311 [2024-10-01 22:40:36.459387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.311 qpair failed and we were unable to recover it. 00:41:41.311 [2024-10-01 22:40:36.459590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.311 [2024-10-01 22:40:36.459601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.311 qpair failed and we were unable to recover it. 00:41:41.311 [2024-10-01 22:40:36.459997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.311 [2024-10-01 22:40:36.460008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.311 qpair failed and we were unable to recover it. 00:41:41.311 [2024-10-01 22:40:36.460330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.311 [2024-10-01 22:40:36.460341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.311 qpair failed and we were unable to recover it. 00:41:41.311 [2024-10-01 22:40:36.460525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.311 [2024-10-01 22:40:36.460536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.311 qpair failed and we were unable to recover it. 00:41:41.311 [2024-10-01 22:40:36.460729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.311 [2024-10-01 22:40:36.460740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.311 qpair failed and we were unable to recover it. 00:41:41.311 [2024-10-01 22:40:36.461044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.311 [2024-10-01 22:40:36.461055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.311 qpair failed and we were unable to recover it. 
00:41:41.311 [2024-10-01 22:40:36.461357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.311 [2024-10-01 22:40:36.461368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.311 qpair failed and we were unable to recover it.
[the same three-line error repeats continuously for every reconnect attempt from 2024-10-01 22:40:36.461357 through 22:40:36.525993 (console timestamps 00:41:41.311-00:41:41.594): each connect() to 10.0.0.2 port 4420 fails with errno = 111, the sock connection error is logged for tqpair=0x130a180, and no qpair recovers]
00:41:41.594 [2024-10-01 22:40:36.526327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.594 [2024-10-01 22:40:36.526337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.594 qpair failed and we were unable to recover it. 00:41:41.594 [2024-10-01 22:40:36.526642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.594 [2024-10-01 22:40:36.526652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.594 qpair failed and we were unable to recover it. 00:41:41.594 [2024-10-01 22:40:36.526945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.594 [2024-10-01 22:40:36.526955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.594 qpair failed and we were unable to recover it. 00:41:41.594 [2024-10-01 22:40:36.527273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.594 [2024-10-01 22:40:36.527284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.594 qpair failed and we were unable to recover it. 00:41:41.594 [2024-10-01 22:40:36.527614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.594 [2024-10-01 22:40:36.527629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.594 qpair failed and we were unable to recover it. 00:41:41.594 [2024-10-01 22:40:36.527924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.594 [2024-10-01 22:40:36.527935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.594 qpair failed and we were unable to recover it. 00:41:41.594 [2024-10-01 22:40:36.528245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.594 [2024-10-01 22:40:36.528255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.594 qpair failed and we were unable to recover it. 00:41:41.594 [2024-10-01 22:40:36.528554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.594 [2024-10-01 22:40:36.528564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.594 qpair failed and we were unable to recover it. 00:41:41.594 [2024-10-01 22:40:36.528861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.594 [2024-10-01 22:40:36.528872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.594 qpair failed and we were unable to recover it. 00:41:41.594 [2024-10-01 22:40:36.529058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.594 [2024-10-01 22:40:36.529069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.594 qpair failed and we were unable to recover it. 
00:41:41.594 [2024-10-01 22:40:36.529353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.594 [2024-10-01 22:40:36.529363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.594 qpair failed and we were unable to recover it. 00:41:41.594 [2024-10-01 22:40:36.529719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.594 [2024-10-01 22:40:36.529729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.594 qpair failed and we were unable to recover it. 00:41:41.594 [2024-10-01 22:40:36.530013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.594 [2024-10-01 22:40:36.530024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.594 qpair failed and we were unable to recover it. 00:41:41.594 [2024-10-01 22:40:36.530407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.594 [2024-10-01 22:40:36.530416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.594 qpair failed and we were unable to recover it. 00:41:41.594 [2024-10-01 22:40:36.531342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.594 [2024-10-01 22:40:36.531362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.594 qpair failed and we were unable to recover it. 00:41:41.594 [2024-10-01 22:40:36.531643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.594 [2024-10-01 22:40:36.531654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.594 qpair failed and we were unable to recover it. 00:41:41.594 [2024-10-01 22:40:36.531983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.594 [2024-10-01 22:40:36.531993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.594 qpair failed and we were unable to recover it. 00:41:41.594 [2024-10-01 22:40:36.532369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.594 [2024-10-01 22:40:36.532379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.594 qpair failed and we were unable to recover it. 00:41:41.594 [2024-10-01 22:40:36.532646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.594 [2024-10-01 22:40:36.532656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.594 qpair failed and we were unable to recover it. 00:41:41.594 [2024-10-01 22:40:36.532934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.594 [2024-10-01 22:40:36.532944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.594 qpair failed and we were unable to recover it. 
00:41:41.594 [2024-10-01 22:40:36.533275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.594 [2024-10-01 22:40:36.533286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.594 qpair failed and we were unable to recover it. 00:41:41.594 [2024-10-01 22:40:36.533591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.594 [2024-10-01 22:40:36.533601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.594 qpair failed and we were unable to recover it. 00:41:41.594 [2024-10-01 22:40:36.533863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.594 [2024-10-01 22:40:36.533873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.594 qpair failed and we were unable to recover it. 00:41:41.594 [2024-10-01 22:40:36.534195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.594 [2024-10-01 22:40:36.534204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.594 qpair failed and we were unable to recover it. 00:41:41.594 [2024-10-01 22:40:36.534486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.594 [2024-10-01 22:40:36.534499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.594 qpair failed and we were unable to recover it. 00:41:41.594 [2024-10-01 22:40:36.534800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.594 [2024-10-01 22:40:36.534810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.594 qpair failed and we were unable to recover it. 00:41:41.594 [2024-10-01 22:40:36.535127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.594 [2024-10-01 22:40:36.535138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.594 qpair failed and we were unable to recover it. 00:41:41.594 [2024-10-01 22:40:36.535442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.594 [2024-10-01 22:40:36.535452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.594 qpair failed and we were unable to recover it. 00:41:41.594 [2024-10-01 22:40:36.535651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.594 [2024-10-01 22:40:36.535661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.594 qpair failed and we were unable to recover it. 00:41:41.594 [2024-10-01 22:40:36.535977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.594 [2024-10-01 22:40:36.535987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.594 qpair failed and we were unable to recover it. 
00:41:41.594 [2024-10-01 22:40:36.536291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.595 [2024-10-01 22:40:36.536302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.595 qpair failed and we were unable to recover it. 00:41:41.595 [2024-10-01 22:40:36.536607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.595 [2024-10-01 22:40:36.536617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.595 qpair failed and we were unable to recover it. 00:41:41.595 [2024-10-01 22:40:36.536957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.595 [2024-10-01 22:40:36.536968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.595 qpair failed and we were unable to recover it. 00:41:41.595 [2024-10-01 22:40:36.537268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.595 [2024-10-01 22:40:36.537278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.595 qpair failed and we were unable to recover it. 00:41:41.595 [2024-10-01 22:40:36.537448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.595 [2024-10-01 22:40:36.537459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.595 qpair failed and we were unable to recover it. 00:41:41.595 [2024-10-01 22:40:36.537665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.595 [2024-10-01 22:40:36.537677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.595 qpair failed and we were unable to recover it. 00:41:41.595 [2024-10-01 22:40:36.537989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.595 [2024-10-01 22:40:36.537999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.595 qpair failed and we were unable to recover it. 00:41:41.595 [2024-10-01 22:40:36.538316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.595 [2024-10-01 22:40:36.538326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.595 qpair failed and we were unable to recover it. 00:41:41.595 [2024-10-01 22:40:36.538656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.595 [2024-10-01 22:40:36.538667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.595 qpair failed and we were unable to recover it. 00:41:41.595 [2024-10-01 22:40:36.538990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.595 [2024-10-01 22:40:36.539000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.595 qpair failed and we were unable to recover it. 
00:41:41.595 [2024-10-01 22:40:36.539373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.595 [2024-10-01 22:40:36.539384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.595 qpair failed and we were unable to recover it. 00:41:41.595 [2024-10-01 22:40:36.539685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.595 [2024-10-01 22:40:36.539695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.595 qpair failed and we were unable to recover it. 00:41:41.595 [2024-10-01 22:40:36.539979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.595 [2024-10-01 22:40:36.539990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.595 qpair failed and we were unable to recover it. 00:41:41.595 [2024-10-01 22:40:36.540299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.595 [2024-10-01 22:40:36.540308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.595 qpair failed and we were unable to recover it. 00:41:41.595 [2024-10-01 22:40:36.540655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.595 [2024-10-01 22:40:36.540665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.595 qpair failed and we were unable to recover it. 00:41:41.595 [2024-10-01 22:40:36.540958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.595 [2024-10-01 22:40:36.540968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.595 qpair failed and we were unable to recover it. 00:41:41.595 [2024-10-01 22:40:36.541247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.595 [2024-10-01 22:40:36.541262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.595 qpair failed and we were unable to recover it. 00:41:41.595 [2024-10-01 22:40:36.541591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.595 [2024-10-01 22:40:36.541602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.595 qpair failed and we were unable to recover it. 00:41:41.595 [2024-10-01 22:40:36.541960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.595 [2024-10-01 22:40:36.541971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.595 qpair failed and we were unable to recover it. 00:41:41.595 [2024-10-01 22:40:36.542291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.595 [2024-10-01 22:40:36.542300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.595 qpair failed and we were unable to recover it. 
00:41:41.595 [2024-10-01 22:40:36.542493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.595 [2024-10-01 22:40:36.542504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.595 qpair failed and we were unable to recover it. 00:41:41.595 [2024-10-01 22:40:36.542875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.595 [2024-10-01 22:40:36.542888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.595 qpair failed and we were unable to recover it. 00:41:41.595 [2024-10-01 22:40:36.543180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.595 [2024-10-01 22:40:36.543191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.595 qpair failed and we were unable to recover it. 00:41:41.595 [2024-10-01 22:40:36.543490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.595 [2024-10-01 22:40:36.543501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.595 qpair failed and we were unable to recover it. 00:41:41.595 [2024-10-01 22:40:36.543790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.595 [2024-10-01 22:40:36.543800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.595 qpair failed and we were unable to recover it. 00:41:41.595 [2024-10-01 22:40:36.544007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.595 [2024-10-01 22:40:36.544017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.595 qpair failed and we were unable to recover it. 00:41:41.595 [2024-10-01 22:40:36.544313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.595 [2024-10-01 22:40:36.544323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.595 qpair failed and we were unable to recover it. 00:41:41.595 [2024-10-01 22:40:36.544519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.595 [2024-10-01 22:40:36.544530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.595 qpair failed and we were unable to recover it. 00:41:41.595 [2024-10-01 22:40:36.544838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.595 [2024-10-01 22:40:36.544848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.595 qpair failed and we were unable to recover it. 00:41:41.596 [2024-10-01 22:40:36.545126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.596 [2024-10-01 22:40:36.545145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.596 qpair failed and we were unable to recover it. 
00:41:41.596 [2024-10-01 22:40:36.545481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.596 [2024-10-01 22:40:36.545492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.596 qpair failed and we were unable to recover it. 00:41:41.596 [2024-10-01 22:40:36.545801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.596 [2024-10-01 22:40:36.545811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.596 qpair failed and we were unable to recover it. 00:41:41.596 [2024-10-01 22:40:36.546109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.596 [2024-10-01 22:40:36.546127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.596 qpair failed and we were unable to recover it. 00:41:41.596 [2024-10-01 22:40:36.546432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.596 [2024-10-01 22:40:36.546442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.596 qpair failed and we were unable to recover it. 00:41:41.596 [2024-10-01 22:40:36.546724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.596 [2024-10-01 22:40:36.546734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.596 qpair failed and we were unable to recover it. 00:41:41.596 [2024-10-01 22:40:36.547048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.596 [2024-10-01 22:40:36.547058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.596 qpair failed and we were unable to recover it. 00:41:41.596 [2024-10-01 22:40:36.547341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.596 [2024-10-01 22:40:36.547356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.596 qpair failed and we were unable to recover it. 00:41:41.596 [2024-10-01 22:40:36.547653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.596 [2024-10-01 22:40:36.547663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.596 qpair failed and we were unable to recover it. 00:41:41.596 [2024-10-01 22:40:36.547964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.596 [2024-10-01 22:40:36.547974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.596 qpair failed and we were unable to recover it. 00:41:41.596 [2024-10-01 22:40:36.548324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.596 [2024-10-01 22:40:36.548335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.596 qpair failed and we were unable to recover it. 
00:41:41.596 [2024-10-01 22:40:36.548720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.596 [2024-10-01 22:40:36.548731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.596 qpair failed and we were unable to recover it. 00:41:41.596 [2024-10-01 22:40:36.549037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.596 [2024-10-01 22:40:36.549047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.596 qpair failed and we were unable to recover it. 00:41:41.596 [2024-10-01 22:40:36.549338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.596 [2024-10-01 22:40:36.549348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.596 qpair failed and we were unable to recover it. 00:41:41.596 [2024-10-01 22:40:36.549660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.596 [2024-10-01 22:40:36.549670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.596 qpair failed and we were unable to recover it. 00:41:41.596 [2024-10-01 22:40:36.549919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.596 [2024-10-01 22:40:36.549929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.596 qpair failed and we were unable to recover it. 00:41:41.596 [2024-10-01 22:40:36.550135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.596 [2024-10-01 22:40:36.550146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.596 qpair failed and we were unable to recover it. 00:41:41.596 [2024-10-01 22:40:36.550448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.596 [2024-10-01 22:40:36.550458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.596 qpair failed and we were unable to recover it. 00:41:41.596 [2024-10-01 22:40:36.550761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.596 [2024-10-01 22:40:36.550772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.596 qpair failed and we were unable to recover it. 00:41:41.596 [2024-10-01 22:40:36.550973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.596 [2024-10-01 22:40:36.550983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.596 qpair failed and we were unable to recover it. 00:41:41.596 [2024-10-01 22:40:36.551253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.596 [2024-10-01 22:40:36.551263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.596 qpair failed and we were unable to recover it. 
00:41:41.596 [2024-10-01 22:40:36.551542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.596 [2024-10-01 22:40:36.551552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.596 qpair failed and we were unable to recover it. 00:41:41.596 [2024-10-01 22:40:36.551875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.596 [2024-10-01 22:40:36.551886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.596 qpair failed and we were unable to recover it. 00:41:41.596 [2024-10-01 22:40:36.552164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.596 [2024-10-01 22:40:36.552182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.596 qpair failed and we were unable to recover it. 00:41:41.596 [2024-10-01 22:40:36.552547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.596 [2024-10-01 22:40:36.552557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.596 qpair failed and we were unable to recover it. 00:41:41.596 [2024-10-01 22:40:36.552855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.596 [2024-10-01 22:40:36.552865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.596 qpair failed and we were unable to recover it. 00:41:41.596 [2024-10-01 22:40:36.553157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.596 [2024-10-01 22:40:36.553174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.596 qpair failed and we were unable to recover it. 00:41:41.596 [2024-10-01 22:40:36.553476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.596 [2024-10-01 22:40:36.553487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.596 qpair failed and we were unable to recover it. 00:41:41.596 [2024-10-01 22:40:36.553796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.596 [2024-10-01 22:40:36.553806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.596 qpair failed and we were unable to recover it. 00:41:41.596 [2024-10-01 22:40:36.554023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.596 [2024-10-01 22:40:36.554033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.596 qpair failed and we were unable to recover it. 00:41:41.596 [2024-10-01 22:40:36.554334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.596 [2024-10-01 22:40:36.554343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.596 qpair failed and we were unable to recover it. 
00:41:41.596 [2024-10-01 22:40:36.554652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.597 [2024-10-01 22:40:36.554663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.597 qpair failed and we were unable to recover it. 00:41:41.597 [2024-10-01 22:40:36.554987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.597 [2024-10-01 22:40:36.554997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.597 qpair failed and we were unable to recover it. 00:41:41.597 [2024-10-01 22:40:36.555282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.597 [2024-10-01 22:40:36.555294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.597 qpair failed and we were unable to recover it. 00:41:41.597 [2024-10-01 22:40:36.555602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.597 [2024-10-01 22:40:36.555611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.597 qpair failed and we were unable to recover it. 00:41:41.597 [2024-10-01 22:40:36.555991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.597 [2024-10-01 22:40:36.556001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.597 qpair failed and we were unable to recover it. 00:41:41.597 [2024-10-01 22:40:36.556315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.597 [2024-10-01 22:40:36.556325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.597 qpair failed and we were unable to recover it. 00:41:41.597 [2024-10-01 22:40:36.556512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.597 [2024-10-01 22:40:36.556523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.597 qpair failed and we were unable to recover it. 00:41:41.597 [2024-10-01 22:40:36.556795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.597 [2024-10-01 22:40:36.556806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.597 qpair failed and we were unable to recover it. 00:41:41.597 [2024-10-01 22:40:36.557093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.597 [2024-10-01 22:40:36.557111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.597 qpair failed and we were unable to recover it. 00:41:41.597 [2024-10-01 22:40:36.557435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.597 [2024-10-01 22:40:36.557445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.597 qpair failed and we were unable to recover it. 
00:41:41.597 [2024-10-01 22:40:36.557747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.597 [2024-10-01 22:40:36.557757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.597 qpair failed and we were unable to recover it. 00:41:41.597 [2024-10-01 22:40:36.557917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.597 [2024-10-01 22:40:36.557928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.597 qpair failed and we were unable to recover it. 00:41:41.597 [2024-10-01 22:40:36.558221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.597 [2024-10-01 22:40:36.558230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.597 qpair failed and we were unable to recover it. 00:41:41.597 [2024-10-01 22:40:36.558509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.597 [2024-10-01 22:40:36.558527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.597 qpair failed and we were unable to recover it. 00:41:41.597 [2024-10-01 22:40:36.558836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.597 [2024-10-01 22:40:36.558846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.597 qpair failed and we were unable to recover it. 00:41:41.597 [2024-10-01 22:40:36.559214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.597 [2024-10-01 22:40:36.559224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.597 qpair failed and we were unable to recover it. 00:41:41.597 [2024-10-01 22:40:36.559513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.597 [2024-10-01 22:40:36.559523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.597 qpair failed and we were unable to recover it. 00:41:41.597 [2024-10-01 22:40:36.559753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.597 [2024-10-01 22:40:36.559763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.597 qpair failed and we were unable to recover it. 00:41:41.597 [2024-10-01 22:40:36.560091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.597 [2024-10-01 22:40:36.560101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.597 qpair failed and we were unable to recover it. 00:41:41.597 [2024-10-01 22:40:36.560429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.597 [2024-10-01 22:40:36.560438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.597 qpair failed and we were unable to recover it. 
00:41:41.597 [2024-10-01 22:40:36.560709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.597 [2024-10-01 22:40:36.560719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.597 qpair failed and we were unable to recover it. 00:41:41.597 [2024-10-01 22:40:36.561076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.597 [2024-10-01 22:40:36.561086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.597 qpair failed and we were unable to recover it. 00:41:41.597 [2024-10-01 22:40:36.561261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.597 [2024-10-01 22:40:36.561272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.597 qpair failed and we were unable to recover it. 00:41:41.597 [2024-10-01 22:40:36.561600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.597 [2024-10-01 22:40:36.561611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.597 qpair failed and we were unable to recover it. 00:41:41.597 [2024-10-01 22:40:36.561911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.597 [2024-10-01 22:40:36.561922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.597 qpair failed and we were unable to recover it. 00:41:41.597 [2024-10-01 22:40:36.562126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.597 [2024-10-01 22:40:36.562136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.597 qpair failed and we were unable to recover it. 00:41:41.597 [2024-10-01 22:40:36.562363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.597 [2024-10-01 22:40:36.562374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.597 qpair failed and we were unable to recover it. 00:41:41.598 [2024-10-01 22:40:36.562687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.598 [2024-10-01 22:40:36.562698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.598 qpair failed and we were unable to recover it. 00:41:41.598 [2024-10-01 22:40:36.563003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.598 [2024-10-01 22:40:36.563013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.598 qpair failed and we were unable to recover it. 00:41:41.598 [2024-10-01 22:40:36.563317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.598 [2024-10-01 22:40:36.563329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.598 qpair failed and we were unable to recover it. 
00:41:41.598 [2024-10-01 22:40:36.563620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.598 [2024-10-01 22:40:36.563634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.598 qpair failed and we were unable to recover it. 00:41:41.598 [2024-10-01 22:40:36.563919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.598 [2024-10-01 22:40:36.563929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.598 qpair failed and we were unable to recover it. 00:41:41.598 [2024-10-01 22:40:36.564103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.598 [2024-10-01 22:40:36.564114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.598 qpair failed and we were unable to recover it. 00:41:41.598 [2024-10-01 22:40:36.564320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.598 [2024-10-01 22:40:36.564330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.598 qpair failed and we were unable to recover it. 00:41:41.598 [2024-10-01 22:40:36.564645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.598 [2024-10-01 22:40:36.564655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.598 qpair failed and we were unable to recover it. 00:41:41.598 [2024-10-01 22:40:36.565012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.598 [2024-10-01 22:40:36.565022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.598 qpair failed and we were unable to recover it. 00:41:41.598 [2024-10-01 22:40:36.565322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.598 [2024-10-01 22:40:36.565332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.598 qpair failed and we were unable to recover it. 00:41:41.598 [2024-10-01 22:40:36.565607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.598 [2024-10-01 22:40:36.565617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.598 qpair failed and we were unable to recover it. 00:41:41.598 [2024-10-01 22:40:36.565939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.598 [2024-10-01 22:40:36.565950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.598 qpair failed and we were unable to recover it. 00:41:41.598 [2024-10-01 22:40:36.566236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.598 [2024-10-01 22:40:36.566247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.598 qpair failed and we were unable to recover it. 
00:41:41.598 [2024-10-01 22:40:36.566517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:41.598 [2024-10-01 22:40:36.566527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420
00:41:41.598 qpair failed and we were unable to recover it.
00:41:41.598-00:41:41.605 [elided: the three lines above repeat ~210 times for successive reconnect attempts, target-side timestamps 2024-10-01 22:40:36.566517 through 22:40:36.629983; every attempt fails with errno = 111 (ECONNREFUSED) on tqpair=0x130a180 / 10.0.0.2:4420]
00:41:41.605 [2024-10-01 22:40:36.630296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.605 [2024-10-01 22:40:36.630306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.605 qpair failed and we were unable to recover it. 00:41:41.605 [2024-10-01 22:40:36.630604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.605 [2024-10-01 22:40:36.630615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.605 qpair failed and we were unable to recover it. 00:41:41.605 [2024-10-01 22:40:36.630859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.605 [2024-10-01 22:40:36.630869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.605 qpair failed and we were unable to recover it. 00:41:41.605 [2024-10-01 22:40:36.631178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.605 [2024-10-01 22:40:36.631188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.605 qpair failed and we were unable to recover it. 00:41:41.605 [2024-10-01 22:40:36.631465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.605 [2024-10-01 22:40:36.631483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.605 qpair failed and we were unable to recover it. 00:41:41.605 [2024-10-01 22:40:36.631701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.605 [2024-10-01 22:40:36.631711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.605 qpair failed and we were unable to recover it. 00:41:41.605 [2024-10-01 22:40:36.632037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.605 [2024-10-01 22:40:36.632047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.605 qpair failed and we were unable to recover it. 00:41:41.605 [2024-10-01 22:40:36.632324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.605 [2024-10-01 22:40:36.632334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.605 qpair failed and we were unable to recover it. 00:41:41.605 [2024-10-01 22:40:36.632610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.605 [2024-10-01 22:40:36.632620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.605 qpair failed and we were unable to recover it. 00:41:41.605 [2024-10-01 22:40:36.632954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.605 [2024-10-01 22:40:36.632965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.605 qpair failed and we were unable to recover it. 
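errno = 111 on Linux is ECONNREFUSED: the host can reach 10.0.0.2, but nothing is accepting TCP connections on port 4420 (the NVMe/TCP well-known port), so the target listener is down or not yet up during this phase of the test. A minimal standalone probe (hypothetical, for illustration only; not part of the test suite) reproduces the same errno that posix_sock_create() is logging:

    /* probe_4420.c - hypothetical illustration: attempt a TCP connect to the
     * NVMe/TCP target and print errno on failure, mirroring the condensed
     * "connect() failed, errno = 111" cycles above. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);            /* NVMe/TCP well-known port */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            /* With no listener on the port this prints:
             * connect() failed, errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }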
00:41:41.605 Read completed with error (sct=0, sc=8) 00:41:41.605 starting I/O failed [repeated for a batch of ~32 queued Read and Write commands; duplicates condensed]
00:41:41.606 [2024-10-01 22:40:36.635147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:41:41.606 Read completed with error (sct=0, sc=8) 00:41:41.606 starting I/O failed [repeated for a second batch of ~32 queued Read and Write commands; duplicates condensed]
00:41:41.606 [2024-10-01 22:40:36.635948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:41:41.606 [2024-10-01 22:40:36.636288 .. 22:40:36.640104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.606 nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7f8000b90 with addr=10.0.0.2, port=4420 00:41:41.606 qpair failed and we were unable to recover it. [10 identical retry cycles on tqpair=0x7fa7f8000b90; duplicates condensed]
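Decoding the two failure bursts above: (sct=0, sc=8) is, per the NVMe specification, status code type 0 (generic) with status code 0x08, "Command Aborted due to SQ Deletion", the status reported for commands still queued when a qpair is torn down. The -6 in "CQ transport error -6" is -ENXIO ("No such device or address"), which spdk_nvme_qpair_process_completions() returns once the transport declares the qpair dead. A sketch of how an application sees both conditions (assuming an already-connected controller and qpair; io_complete and drain_completions are illustrative names, not SPDK APIs):

    #include <stdio.h>
    #include <string.h>
    #include "spdk/nvme.h"

    /* Per-I/O completion callback (registered with e.g. spdk_nvme_ns_cmd_read):
     * this is where the log's "Read/Write completed with error (sct=0, sc=8)"
     * entries come from. */
    static void
    io_complete(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
        (void)cb_arg;
        if (spdk_nvme_cpl_is_error(cpl)) {
            /* For the aborted batches above: sct == SPDK_NVME_SCT_GENERIC (0),
             * sc == SPDK_NVME_SC_ABORTED_SQ_DELETION (0x08). */
            fprintf(stderr, "I/O failed: sct=%d, sc=%d\n",
                    cpl->status.sct, cpl->status.sc);
        }
    }

    /* Poll one qpair; a negative return is a transport-level failure. */
    static void
    drain_completions(struct spdk_nvme_qpair *qpair)
    {
        int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);
        if (rc < 0) {
            /* rc == -6 is -ENXIO, matching "CQ transport error -6
             * (No such device or address)" in the log. */
            fprintf(stderr, "CQ transport error %d (%s)\n", rc, strerror(-rc));
        }
    }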
00:41:41.607 [2024-10-01 22:40:36.640936 .. 22:40:36.679014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.607 nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.607 qpair failed and we were unable to recover it. [128 further identical retry cycles on tqpair=0x130a180; duplicates condensed]
00:41:41.611 [2024-10-01 22:40:36.679200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.611 [2024-10-01 22:40:36.679210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.611 qpair failed and we were unable to recover it. 00:41:41.611 [2024-10-01 22:40:36.679411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.611 [2024-10-01 22:40:36.679421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.611 qpair failed and we were unable to recover it. 00:41:41.611 [2024-10-01 22:40:36.679701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.611 [2024-10-01 22:40:36.679711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.611 qpair failed and we were unable to recover it. 00:41:41.611 [2024-10-01 22:40:36.680000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.611 [2024-10-01 22:40:36.680009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.611 qpair failed and we were unable to recover it. 00:41:41.611 [2024-10-01 22:40:36.680324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.611 [2024-10-01 22:40:36.680333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.611 qpair failed and we were unable to recover it. 00:41:41.611 [2024-10-01 22:40:36.680687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.611 [2024-10-01 22:40:36.680698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.611 qpair failed and we were unable to recover it. 00:41:41.611 [2024-10-01 22:40:36.681030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.611 [2024-10-01 22:40:36.681040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.611 qpair failed and we were unable to recover it. 00:41:41.611 [2024-10-01 22:40:36.681334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.611 [2024-10-01 22:40:36.681344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.611 qpair failed and we were unable to recover it. 00:41:41.611 [2024-10-01 22:40:36.681635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.611 [2024-10-01 22:40:36.681647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.611 qpair failed and we were unable to recover it. 00:41:41.611 [2024-10-01 22:40:36.681954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.611 [2024-10-01 22:40:36.681964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.611 qpair failed and we were unable to recover it. 
00:41:41.611 [2024-10-01 22:40:36.682274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.611 [2024-10-01 22:40:36.682284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.611 qpair failed and we were unable to recover it. 00:41:41.611 [2024-10-01 22:40:36.682508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.611 [2024-10-01 22:40:36.682518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.611 qpair failed and we were unable to recover it. 00:41:41.611 [2024-10-01 22:40:36.682882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.611 [2024-10-01 22:40:36.682892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.611 qpair failed and we were unable to recover it. 00:41:41.611 [2024-10-01 22:40:36.683116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.611 [2024-10-01 22:40:36.683126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.611 qpair failed and we were unable to recover it. 00:41:41.611 [2024-10-01 22:40:36.683313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.611 [2024-10-01 22:40:36.683323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.611 qpair failed and we were unable to recover it. 00:41:41.611 [2024-10-01 22:40:36.683562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.611 [2024-10-01 22:40:36.683573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.611 qpair failed and we were unable to recover it. 00:41:41.611 [2024-10-01 22:40:36.683762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.611 [2024-10-01 22:40:36.683773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.611 qpair failed and we were unable to recover it. 00:41:41.611 [2024-10-01 22:40:36.683990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.611 [2024-10-01 22:40:36.684000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.611 qpair failed and we were unable to recover it. 00:41:41.611 [2024-10-01 22:40:36.684389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.611 [2024-10-01 22:40:36.684399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.611 qpair failed and we were unable to recover it. 00:41:41.611 [2024-10-01 22:40:36.684697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.611 [2024-10-01 22:40:36.684709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.611 qpair failed and we were unable to recover it. 
00:41:41.611 [2024-10-01 22:40:36.684894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.611 [2024-10-01 22:40:36.684905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.611 qpair failed and we were unable to recover it. 00:41:41.611 [2024-10-01 22:40:36.685212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.611 [2024-10-01 22:40:36.685222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.611 qpair failed and we were unable to recover it. 00:41:41.611 [2024-10-01 22:40:36.685510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.611 [2024-10-01 22:40:36.685519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.611 qpair failed and we were unable to recover it. 00:41:41.611 [2024-10-01 22:40:36.685822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.611 [2024-10-01 22:40:36.685833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.611 qpair failed and we were unable to recover it. 00:41:41.611 [2024-10-01 22:40:36.686131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.611 [2024-10-01 22:40:36.686142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.611 qpair failed and we were unable to recover it. 00:41:41.611 [2024-10-01 22:40:36.686422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.611 [2024-10-01 22:40:36.686432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.611 qpair failed and we were unable to recover it. 00:41:41.611 [2024-10-01 22:40:36.686735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.611 [2024-10-01 22:40:36.686745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.611 qpair failed and we were unable to recover it. 00:41:41.611 [2024-10-01 22:40:36.686954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.611 [2024-10-01 22:40:36.686964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.611 qpair failed and we were unable to recover it. 00:41:41.611 [2024-10-01 22:40:36.687287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.611 [2024-10-01 22:40:36.687297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.611 qpair failed and we were unable to recover it. 00:41:41.611 [2024-10-01 22:40:36.687628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.611 [2024-10-01 22:40:36.687638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.611 qpair failed and we were unable to recover it. 
00:41:41.611 [2024-10-01 22:40:36.687927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.611 [2024-10-01 22:40:36.687938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.611 qpair failed and we were unable to recover it. 00:41:41.611 [2024-10-01 22:40:36.688238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.611 [2024-10-01 22:40:36.688248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.611 qpair failed and we were unable to recover it. 00:41:41.611 [2024-10-01 22:40:36.688575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.611 [2024-10-01 22:40:36.688586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.611 qpair failed and we were unable to recover it. 00:41:41.611 [2024-10-01 22:40:36.688900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.611 [2024-10-01 22:40:36.688910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.611 qpair failed and we were unable to recover it. 00:41:41.612 [2024-10-01 22:40:36.689105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.612 [2024-10-01 22:40:36.689115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.612 qpair failed and we were unable to recover it. 00:41:41.612 [2024-10-01 22:40:36.689437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.612 [2024-10-01 22:40:36.689447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.612 qpair failed and we were unable to recover it. 00:41:41.612 [2024-10-01 22:40:36.689751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.612 [2024-10-01 22:40:36.689762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.612 qpair failed and we were unable to recover it. 00:41:41.612 [2024-10-01 22:40:36.690057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.612 [2024-10-01 22:40:36.690067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.612 qpair failed and we were unable to recover it. 00:41:41.612 [2024-10-01 22:40:36.690362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.612 [2024-10-01 22:40:36.690373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.612 qpair failed and we were unable to recover it. 00:41:41.612 [2024-10-01 22:40:36.690688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.612 [2024-10-01 22:40:36.690698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.612 qpair failed and we were unable to recover it. 
00:41:41.612 [2024-10-01 22:40:36.691000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.612 [2024-10-01 22:40:36.691010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.612 qpair failed and we were unable to recover it. 00:41:41.612 [2024-10-01 22:40:36.691322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.612 [2024-10-01 22:40:36.691332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.612 qpair failed and we were unable to recover it. 00:41:41.612 [2024-10-01 22:40:36.691651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.612 [2024-10-01 22:40:36.691662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.612 qpair failed and we were unable to recover it. 00:41:41.612 [2024-10-01 22:40:36.692031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.612 [2024-10-01 22:40:36.692042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.612 qpair failed and we were unable to recover it. 00:41:41.612 [2024-10-01 22:40:36.692215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.612 [2024-10-01 22:40:36.692225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.612 qpair failed and we were unable to recover it. 00:41:41.612 [2024-10-01 22:40:36.692463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.612 [2024-10-01 22:40:36.692473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.612 qpair failed and we were unable to recover it. 00:41:41.612 [2024-10-01 22:40:36.692801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.612 [2024-10-01 22:40:36.692813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.612 qpair failed and we were unable to recover it. 00:41:41.612 [2024-10-01 22:40:36.693000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.612 [2024-10-01 22:40:36.693011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.612 qpair failed and we were unable to recover it. 00:41:41.612 [2024-10-01 22:40:36.693355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.612 [2024-10-01 22:40:36.693365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.612 qpair failed and we were unable to recover it. 00:41:41.612 [2024-10-01 22:40:36.693680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.612 [2024-10-01 22:40:36.693690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.612 qpair failed and we were unable to recover it. 
00:41:41.612 [2024-10-01 22:40:36.694021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.612 [2024-10-01 22:40:36.694030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.612 qpair failed and we were unable to recover it. 00:41:41.612 [2024-10-01 22:40:36.694346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.612 [2024-10-01 22:40:36.694356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.612 qpair failed and we were unable to recover it. 00:41:41.612 [2024-10-01 22:40:36.694637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.612 [2024-10-01 22:40:36.694648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.612 qpair failed and we were unable to recover it. 00:41:41.612 [2024-10-01 22:40:36.695021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.612 [2024-10-01 22:40:36.695032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.612 qpair failed and we were unable to recover it. 00:41:41.612 [2024-10-01 22:40:36.695334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.612 [2024-10-01 22:40:36.695343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.612 qpair failed and we were unable to recover it. 00:41:41.612 [2024-10-01 22:40:36.695637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.612 [2024-10-01 22:40:36.695649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.612 qpair failed and we were unable to recover it. 00:41:41.612 [2024-10-01 22:40:36.695959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.612 [2024-10-01 22:40:36.695969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.612 qpair failed and we were unable to recover it. 00:41:41.612 [2024-10-01 22:40:36.696178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.612 [2024-10-01 22:40:36.696188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.612 qpair failed and we were unable to recover it. 00:41:41.612 [2024-10-01 22:40:36.696358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.612 [2024-10-01 22:40:36.696368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.612 qpair failed and we were unable to recover it. 00:41:41.612 [2024-10-01 22:40:36.696685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.612 [2024-10-01 22:40:36.696695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.612 qpair failed and we were unable to recover it. 
00:41:41.612 [2024-10-01 22:40:36.696985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.612 [2024-10-01 22:40:36.696995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.612 qpair failed and we were unable to recover it. 00:41:41.612 [2024-10-01 22:40:36.697313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.612 [2024-10-01 22:40:36.697322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.612 qpair failed and we were unable to recover it. 00:41:41.612 [2024-10-01 22:40:36.697605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.612 [2024-10-01 22:40:36.697621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.612 qpair failed and we were unable to recover it. 00:41:41.612 [2024-10-01 22:40:36.697952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.612 [2024-10-01 22:40:36.697962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.612 qpair failed and we were unable to recover it. 00:41:41.612 [2024-10-01 22:40:36.698138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.612 [2024-10-01 22:40:36.698148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.612 qpair failed and we were unable to recover it. 00:41:41.612 [2024-10-01 22:40:36.698504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.612 [2024-10-01 22:40:36.698514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.612 qpair failed and we were unable to recover it. 00:41:41.613 [2024-10-01 22:40:36.698777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.613 [2024-10-01 22:40:36.698787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.613 qpair failed and we were unable to recover it. 00:41:41.613 [2024-10-01 22:40:36.699113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.613 [2024-10-01 22:40:36.699123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.613 qpair failed and we were unable to recover it. 00:41:41.613 [2024-10-01 22:40:36.699405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.613 [2024-10-01 22:40:36.699423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.613 qpair failed and we were unable to recover it. 00:41:41.613 [2024-10-01 22:40:36.699628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.613 [2024-10-01 22:40:36.699638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.613 qpair failed and we were unable to recover it. 
00:41:41.613 [2024-10-01 22:40:36.699831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.613 [2024-10-01 22:40:36.699842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.613 qpair failed and we were unable to recover it. 00:41:41.613 [2024-10-01 22:40:36.700163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.613 [2024-10-01 22:40:36.700173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.613 qpair failed and we were unable to recover it. 00:41:41.613 [2024-10-01 22:40:36.700395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.613 [2024-10-01 22:40:36.700406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.613 qpair failed and we were unable to recover it. 00:41:41.613 [2024-10-01 22:40:36.700566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.613 [2024-10-01 22:40:36.700579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.613 qpair failed and we were unable to recover it. 00:41:41.613 [2024-10-01 22:40:36.700899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.613 [2024-10-01 22:40:36.700911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.613 qpair failed and we were unable to recover it. 00:41:41.613 [2024-10-01 22:40:36.701216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.613 [2024-10-01 22:40:36.701227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.613 qpair failed and we were unable to recover it. 00:41:41.613 [2024-10-01 22:40:36.701564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.613 [2024-10-01 22:40:36.701575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.613 qpair failed and we were unable to recover it. 00:41:41.613 [2024-10-01 22:40:36.701959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.613 [2024-10-01 22:40:36.701970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.613 qpair failed and we were unable to recover it. 00:41:41.613 [2024-10-01 22:40:36.702180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.613 [2024-10-01 22:40:36.702190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.613 qpair failed and we were unable to recover it. 00:41:41.613 [2024-10-01 22:40:36.702509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.613 [2024-10-01 22:40:36.702519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.613 qpair failed and we were unable to recover it. 
00:41:41.613 [2024-10-01 22:40:36.702796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.613 [2024-10-01 22:40:36.702807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.613 qpair failed and we were unable to recover it. 00:41:41.613 [2024-10-01 22:40:36.703084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.613 [2024-10-01 22:40:36.703093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.613 qpair failed and we were unable to recover it. 00:41:41.613 [2024-10-01 22:40:36.703363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.613 [2024-10-01 22:40:36.703373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.613 qpair failed and we were unable to recover it. 00:41:41.613 [2024-10-01 22:40:36.703559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.613 [2024-10-01 22:40:36.703570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.613 qpair failed and we were unable to recover it. 00:41:41.613 [2024-10-01 22:40:36.703931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.613 [2024-10-01 22:40:36.703941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.613 qpair failed and we were unable to recover it. 00:41:41.613 [2024-10-01 22:40:36.704226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.613 [2024-10-01 22:40:36.704236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.613 qpair failed and we were unable to recover it. 00:41:41.613 [2024-10-01 22:40:36.704541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.613 [2024-10-01 22:40:36.704551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.613 qpair failed and we were unable to recover it. 00:41:41.613 [2024-10-01 22:40:36.704805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.613 [2024-10-01 22:40:36.704816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.613 qpair failed and we were unable to recover it. 00:41:41.613 [2024-10-01 22:40:36.705115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.613 [2024-10-01 22:40:36.705125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.613 qpair failed and we were unable to recover it. 00:41:41.613 [2024-10-01 22:40:36.705429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.613 [2024-10-01 22:40:36.705441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.613 qpair failed and we were unable to recover it. 
00:41:41.613 [2024-10-01 22:40:36.705745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.613 [2024-10-01 22:40:36.705755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.613 qpair failed and we were unable to recover it. 00:41:41.613 [2024-10-01 22:40:36.706050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.613 [2024-10-01 22:40:36.706061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.613 qpair failed and we were unable to recover it. 00:41:41.613 [2024-10-01 22:40:36.706361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.613 [2024-10-01 22:40:36.706371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.613 qpair failed and we were unable to recover it. 00:41:41.613 [2024-10-01 22:40:36.706675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.613 [2024-10-01 22:40:36.706685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.613 qpair failed and we were unable to recover it. 00:41:41.613 [2024-10-01 22:40:36.707016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.613 [2024-10-01 22:40:36.707026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.613 qpair failed and we were unable to recover it. 00:41:41.613 [2024-10-01 22:40:36.707327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.614 [2024-10-01 22:40:36.707336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.614 qpair failed and we were unable to recover it. 00:41:41.614 [2024-10-01 22:40:36.707618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.614 [2024-10-01 22:40:36.707633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.614 qpair failed and we were unable to recover it. 00:41:41.614 [2024-10-01 22:40:36.707937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.614 [2024-10-01 22:40:36.707948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.614 qpair failed and we were unable to recover it. 00:41:41.614 [2024-10-01 22:40:36.708254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.614 [2024-10-01 22:40:36.708264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.614 qpair failed and we were unable to recover it. 00:41:41.614 [2024-10-01 22:40:36.708569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.614 [2024-10-01 22:40:36.708579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.614 qpair failed and we were unable to recover it. 
00:41:41.614 [2024-10-01 22:40:36.708667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.614 [2024-10-01 22:40:36.708677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.614 qpair failed and we were unable to recover it. 00:41:41.614 [2024-10-01 22:40:36.708958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.614 [2024-10-01 22:40:36.708968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.614 qpair failed and we were unable to recover it. 00:41:41.614 [2024-10-01 22:40:36.709285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.614 [2024-10-01 22:40:36.709294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.614 qpair failed and we were unable to recover it. 00:41:41.614 [2024-10-01 22:40:36.709508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.614 [2024-10-01 22:40:36.709518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.614 qpair failed and we were unable to recover it. 00:41:41.614 [2024-10-01 22:40:36.709906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.614 [2024-10-01 22:40:36.709917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.614 qpair failed and we were unable to recover it. 00:41:41.614 [2024-10-01 22:40:36.710199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.614 [2024-10-01 22:40:36.710210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.614 qpair failed and we were unable to recover it. 00:41:41.614 [2024-10-01 22:40:36.710521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.614 [2024-10-01 22:40:36.710531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.614 qpair failed and we were unable to recover it. 00:41:41.614 [2024-10-01 22:40:36.710845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.614 [2024-10-01 22:40:36.710862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.614 qpair failed and we were unable to recover it. 00:41:41.614 [2024-10-01 22:40:36.711168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.614 [2024-10-01 22:40:36.711178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.614 qpair failed and we were unable to recover it. 00:41:41.614 [2024-10-01 22:40:36.711466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.614 [2024-10-01 22:40:36.711476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.614 qpair failed and we were unable to recover it. 
00:41:41.614 [2024-10-01 22:40:36.711780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.614 [2024-10-01 22:40:36.711790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.614 qpair failed and we were unable to recover it. 00:41:41.614 [2024-10-01 22:40:36.712107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.614 [2024-10-01 22:40:36.712117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.614 qpair failed and we were unable to recover it. 00:41:41.614 [2024-10-01 22:40:36.712423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.614 [2024-10-01 22:40:36.712433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.614 qpair failed and we were unable to recover it. 00:41:41.614 [2024-10-01 22:40:36.712609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.614 [2024-10-01 22:40:36.712620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.614 qpair failed and we were unable to recover it. 00:41:41.614 [2024-10-01 22:40:36.712934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.614 [2024-10-01 22:40:36.712945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.614 qpair failed and we were unable to recover it. 00:41:41.614 [2024-10-01 22:40:36.713232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.614 [2024-10-01 22:40:36.713250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.614 qpair failed and we were unable to recover it. 00:41:41.614 [2024-10-01 22:40:36.713559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.614 [2024-10-01 22:40:36.713569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.614 qpair failed and we were unable to recover it. 00:41:41.614 [2024-10-01 22:40:36.713853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.614 [2024-10-01 22:40:36.713864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.614 qpair failed and we were unable to recover it. 00:41:41.614 [2024-10-01 22:40:36.714214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.614 [2024-10-01 22:40:36.714223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.615 qpair failed and we were unable to recover it. 00:41:41.615 [2024-10-01 22:40:36.714519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.615 [2024-10-01 22:40:36.714530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.615 qpair failed and we were unable to recover it. 
00:41:41.615 [2024-10-01 22:40:36.714805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.615 [2024-10-01 22:40:36.714815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.615 qpair failed and we were unable to recover it. 00:41:41.615 [2024-10-01 22:40:36.715108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.615 [2024-10-01 22:40:36.715118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.615 qpair failed and we were unable to recover it. 00:41:41.615 [2024-10-01 22:40:36.715424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.615 [2024-10-01 22:40:36.715433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.615 qpair failed and we were unable to recover it. 00:41:41.615 [2024-10-01 22:40:36.715729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.615 [2024-10-01 22:40:36.715739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.615 qpair failed and we were unable to recover it. 00:41:41.615 [2024-10-01 22:40:36.715912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.615 [2024-10-01 22:40:36.715923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.615 qpair failed and we were unable to recover it. 00:41:41.615 [2024-10-01 22:40:36.716138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.615 [2024-10-01 22:40:36.716148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.615 qpair failed and we were unable to recover it. 00:41:41.615 [2024-10-01 22:40:36.716550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.615 [2024-10-01 22:40:36.716560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.615 qpair failed and we were unable to recover it. 00:41:41.615 [2024-10-01 22:40:36.716769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.615 [2024-10-01 22:40:36.716780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.615 qpair failed and we were unable to recover it. 00:41:41.615 [2024-10-01 22:40:36.717096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.615 [2024-10-01 22:40:36.717106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.615 qpair failed and we were unable to recover it. 00:41:41.615 [2024-10-01 22:40:36.717420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.615 [2024-10-01 22:40:36.717430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.615 qpair failed and we were unable to recover it. 
00:41:41.615 [2024-10-01 22:40:36.717738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:41.615 [2024-10-01 22:40:36.717749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420
00:41:41.615 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix.c:1055 connect() failed, errno = 111 -> nvme_tcp.c:2399 sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats without variation for every reconnect attempt from 22:40:36.717738 through 22:40:36.780206; duplicate records elided ...]
00:41:41.622 [2024-10-01 22:40:36.780197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:41.622 [2024-10-01 22:40:36.780206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420
00:41:41.622 qpair failed and we were unable to recover it.
00:41:41.622 [2024-10-01 22:40:36.780486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.622 [2024-10-01 22:40:36.780496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.622 qpair failed and we were unable to recover it. 00:41:41.622 [2024-10-01 22:40:36.780797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.622 [2024-10-01 22:40:36.780808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.622 qpair failed and we were unable to recover it. 00:41:41.622 [2024-10-01 22:40:36.781095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.622 [2024-10-01 22:40:36.781114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.622 qpair failed and we were unable to recover it. 00:41:41.622 [2024-10-01 22:40:36.781449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.622 [2024-10-01 22:40:36.781458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.622 qpair failed and we were unable to recover it. 00:41:41.622 [2024-10-01 22:40:36.781649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.622 [2024-10-01 22:40:36.781659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.622 qpair failed and we were unable to recover it. 00:41:41.622 [2024-10-01 22:40:36.781974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.622 [2024-10-01 22:40:36.781984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.622 qpair failed and we were unable to recover it. 00:41:41.622 [2024-10-01 22:40:36.782277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.622 [2024-10-01 22:40:36.782287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.622 qpair failed and we were unable to recover it. 00:41:41.622 [2024-10-01 22:40:36.782484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.622 [2024-10-01 22:40:36.782494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.622 qpair failed and we were unable to recover it. 00:41:41.622 [2024-10-01 22:40:36.782790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.622 [2024-10-01 22:40:36.782800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.622 qpair failed and we were unable to recover it. 00:41:41.622 [2024-10-01 22:40:36.783114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.622 [2024-10-01 22:40:36.783124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.622 qpair failed and we were unable to recover it. 
00:41:41.622 [2024-10-01 22:40:36.783407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.622 [2024-10-01 22:40:36.783418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.622 qpair failed and we were unable to recover it. 00:41:41.622 [2024-10-01 22:40:36.783739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.622 [2024-10-01 22:40:36.783749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.622 qpair failed and we were unable to recover it. 00:41:41.622 [2024-10-01 22:40:36.784072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.622 [2024-10-01 22:40:36.784082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.622 qpair failed and we were unable to recover it. 00:41:41.622 [2024-10-01 22:40:36.784385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.622 [2024-10-01 22:40:36.784396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.622 qpair failed and we were unable to recover it. 00:41:41.622 [2024-10-01 22:40:36.784685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.622 [2024-10-01 22:40:36.784696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.622 qpair failed and we were unable to recover it. 00:41:41.622 [2024-10-01 22:40:36.785009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.622 [2024-10-01 22:40:36.785019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.622 qpair failed and we were unable to recover it. 00:41:41.622 [2024-10-01 22:40:36.785240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.622 [2024-10-01 22:40:36.785250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.622 qpair failed and we were unable to recover it. 00:41:41.622 [2024-10-01 22:40:36.785580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.622 [2024-10-01 22:40:36.785590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.622 qpair failed and we were unable to recover it. 00:41:41.622 [2024-10-01 22:40:36.785893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.622 [2024-10-01 22:40:36.785905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.622 qpair failed and we were unable to recover it. 00:41:41.622 [2024-10-01 22:40:36.786182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.622 [2024-10-01 22:40:36.786192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.622 qpair failed and we were unable to recover it. 
00:41:41.623 [2024-10-01 22:40:36.786483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.623 [2024-10-01 22:40:36.786494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.623 qpair failed and we were unable to recover it. 00:41:41.623 [2024-10-01 22:40:36.786796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.623 [2024-10-01 22:40:36.786806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.623 qpair failed and we were unable to recover it. 00:41:41.623 [2024-10-01 22:40:36.787114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.623 [2024-10-01 22:40:36.787124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.623 qpair failed and we were unable to recover it. 00:41:41.623 [2024-10-01 22:40:36.787443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.623 [2024-10-01 22:40:36.787453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.623 qpair failed and we were unable to recover it. 00:41:41.623 [2024-10-01 22:40:36.787756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.623 [2024-10-01 22:40:36.787766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.623 qpair failed and we were unable to recover it. 00:41:41.623 [2024-10-01 22:40:36.788073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.623 [2024-10-01 22:40:36.788083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.623 qpair failed and we were unable to recover it. 00:41:41.623 [2024-10-01 22:40:36.788361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.623 [2024-10-01 22:40:36.788370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.623 qpair failed and we were unable to recover it. 00:41:41.623 [2024-10-01 22:40:36.788685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.623 [2024-10-01 22:40:36.788695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.623 qpair failed and we were unable to recover it. 00:41:41.623 [2024-10-01 22:40:36.789043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.623 [2024-10-01 22:40:36.789053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.623 qpair failed and we were unable to recover it. 00:41:41.623 [2024-10-01 22:40:36.789358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.623 [2024-10-01 22:40:36.789367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.623 qpair failed and we were unable to recover it. 
00:41:41.623 [2024-10-01 22:40:36.789554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.623 [2024-10-01 22:40:36.789564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.623 qpair failed and we were unable to recover it. 00:41:41.623 [2024-10-01 22:40:36.789844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.623 [2024-10-01 22:40:36.789854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.623 qpair failed and we were unable to recover it. 00:41:41.623 [2024-10-01 22:40:36.790046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.623 [2024-10-01 22:40:36.790056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.623 qpair failed and we were unable to recover it. 00:41:41.623 [2024-10-01 22:40:36.790319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.623 [2024-10-01 22:40:36.790329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.623 qpair failed and we were unable to recover it. 00:41:41.623 [2024-10-01 22:40:36.790634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.623 [2024-10-01 22:40:36.790644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.623 qpair failed and we were unable to recover it. 00:41:41.623 [2024-10-01 22:40:36.790968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.623 [2024-10-01 22:40:36.790978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.623 qpair failed and we were unable to recover it. 00:41:41.623 [2024-10-01 22:40:36.791280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.623 [2024-10-01 22:40:36.791290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.623 qpair failed and we were unable to recover it. 00:41:41.623 [2024-10-01 22:40:36.791596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.623 [2024-10-01 22:40:36.791606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.623 qpair failed and we were unable to recover it. 00:41:41.623 [2024-10-01 22:40:36.791979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.623 [2024-10-01 22:40:36.791988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.623 qpair failed and we were unable to recover it. 00:41:41.623 [2024-10-01 22:40:36.792269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.623 [2024-10-01 22:40:36.792279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.623 qpair failed and we were unable to recover it. 
00:41:41.623 [2024-10-01 22:40:36.792589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.623 [2024-10-01 22:40:36.792599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.623 qpair failed and we were unable to recover it. 00:41:41.623 [2024-10-01 22:40:36.792978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.623 [2024-10-01 22:40:36.792988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.623 qpair failed and we were unable to recover it. 00:41:41.623 [2024-10-01 22:40:36.793306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.623 [2024-10-01 22:40:36.793316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.623 qpair failed and we were unable to recover it. 00:41:41.623 [2024-10-01 22:40:36.793633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.623 [2024-10-01 22:40:36.793645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.623 qpair failed and we were unable to recover it. 00:41:41.623 [2024-10-01 22:40:36.793954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.623 [2024-10-01 22:40:36.793963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.623 qpair failed and we were unable to recover it. 00:41:41.623 [2024-10-01 22:40:36.794270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.623 [2024-10-01 22:40:36.794282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.623 qpair failed and we were unable to recover it. 00:41:41.623 [2024-10-01 22:40:36.794586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.623 [2024-10-01 22:40:36.794596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.623 qpair failed and we were unable to recover it. 00:41:41.623 [2024-10-01 22:40:36.794795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.623 [2024-10-01 22:40:36.794804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.623 qpair failed and we were unable to recover it. 00:41:41.623 [2024-10-01 22:40:36.795142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.623 [2024-10-01 22:40:36.795152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.623 qpair failed and we were unable to recover it. 00:41:41.623 [2024-10-01 22:40:36.795455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.623 [2024-10-01 22:40:36.795465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.623 qpair failed and we were unable to recover it. 
00:41:41.623 [2024-10-01 22:40:36.795672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.623 [2024-10-01 22:40:36.795682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.623 qpair failed and we were unable to recover it. 00:41:41.623 [2024-10-01 22:40:36.796021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.624 [2024-10-01 22:40:36.796030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.624 qpair failed and we were unable to recover it. 00:41:41.624 [2024-10-01 22:40:36.796308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.624 [2024-10-01 22:40:36.796326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.624 qpair failed and we were unable to recover it. 00:41:41.624 [2024-10-01 22:40:36.796643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.624 [2024-10-01 22:40:36.796653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.624 qpair failed and we were unable to recover it. 00:41:41.624 [2024-10-01 22:40:36.796957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.624 [2024-10-01 22:40:36.796967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.624 qpair failed and we were unable to recover it. 00:41:41.624 [2024-10-01 22:40:36.797192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.624 [2024-10-01 22:40:36.797201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.624 qpair failed and we were unable to recover it. 00:41:41.624 [2024-10-01 22:40:36.797502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.624 [2024-10-01 22:40:36.797512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.624 qpair failed and we were unable to recover it. 00:41:41.624 [2024-10-01 22:40:36.797803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.624 [2024-10-01 22:40:36.797813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.624 qpair failed and we were unable to recover it. 00:41:41.624 [2024-10-01 22:40:36.798126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.624 [2024-10-01 22:40:36.798136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.624 qpair failed and we were unable to recover it. 00:41:41.624 [2024-10-01 22:40:36.798421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.624 [2024-10-01 22:40:36.798430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.624 qpair failed and we were unable to recover it. 
00:41:41.624 [2024-10-01 22:40:36.798735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.624 [2024-10-01 22:40:36.798745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.624 qpair failed and we were unable to recover it. 00:41:41.624 [2024-10-01 22:40:36.799054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.624 [2024-10-01 22:40:36.799064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.624 qpair failed and we were unable to recover it. 00:41:41.624 [2024-10-01 22:40:36.799267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.624 [2024-10-01 22:40:36.799277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.624 qpair failed and we were unable to recover it. 00:41:41.624 [2024-10-01 22:40:36.799580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.624 [2024-10-01 22:40:36.799590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.624 qpair failed and we were unable to recover it. 00:41:41.624 [2024-10-01 22:40:36.799786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.624 [2024-10-01 22:40:36.799796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.624 qpair failed and we were unable to recover it. 00:41:41.624 [2024-10-01 22:40:36.800102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.624 [2024-10-01 22:40:36.800112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.624 qpair failed and we were unable to recover it. 00:41:41.624 [2024-10-01 22:40:36.800331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.624 [2024-10-01 22:40:36.800341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.624 qpair failed and we were unable to recover it. 00:41:41.624 [2024-10-01 22:40:36.800712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.624 [2024-10-01 22:40:36.800723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.624 qpair failed and we were unable to recover it. 00:41:41.624 [2024-10-01 22:40:36.801023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.624 [2024-10-01 22:40:36.801034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.624 qpair failed and we were unable to recover it. 00:41:41.624 [2024-10-01 22:40:36.801360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.624 [2024-10-01 22:40:36.801371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.624 qpair failed and we were unable to recover it. 
00:41:41.624 [2024-10-01 22:40:36.801670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.624 [2024-10-01 22:40:36.801680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.624 qpair failed and we were unable to recover it. 00:41:41.624 [2024-10-01 22:40:36.801958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.624 [2024-10-01 22:40:36.801968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.624 qpair failed and we were unable to recover it. 00:41:41.624 [2024-10-01 22:40:36.802271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.624 [2024-10-01 22:40:36.802280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.624 qpair failed and we were unable to recover it. 00:41:41.624 [2024-10-01 22:40:36.802588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.624 [2024-10-01 22:40:36.802598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.624 qpair failed and we were unable to recover it. 00:41:41.624 [2024-10-01 22:40:36.802901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.624 [2024-10-01 22:40:36.802911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.624 qpair failed and we were unable to recover it. 00:41:41.624 [2024-10-01 22:40:36.803232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.624 [2024-10-01 22:40:36.803242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.624 qpair failed and we were unable to recover it. 00:41:41.624 [2024-10-01 22:40:36.803549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.624 [2024-10-01 22:40:36.803559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.624 qpair failed and we were unable to recover it. 00:41:41.624 [2024-10-01 22:40:36.803867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.624 [2024-10-01 22:40:36.803877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.624 qpair failed and we were unable to recover it. 00:41:41.624 [2024-10-01 22:40:36.804180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.624 [2024-10-01 22:40:36.804190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.624 qpair failed and we were unable to recover it. 00:41:41.624 [2024-10-01 22:40:36.804473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.624 [2024-10-01 22:40:36.804483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.624 qpair failed and we were unable to recover it. 
00:41:41.624 [2024-10-01 22:40:36.804788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.624 [2024-10-01 22:40:36.804798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.624 qpair failed and we were unable to recover it. 00:41:41.624 [2024-10-01 22:40:36.804960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.624 [2024-10-01 22:40:36.804971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.624 qpair failed and we were unable to recover it. 00:41:41.624 [2024-10-01 22:40:36.805283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.624 [2024-10-01 22:40:36.805293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.624 qpair failed and we were unable to recover it. 00:41:41.624 [2024-10-01 22:40:36.805612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.624 [2024-10-01 22:40:36.805623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.624 qpair failed and we were unable to recover it. 00:41:41.624 [2024-10-01 22:40:36.805936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.624 [2024-10-01 22:40:36.805945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.624 qpair failed and we were unable to recover it. 00:41:41.624 [2024-10-01 22:40:36.806249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.624 [2024-10-01 22:40:36.806259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.624 qpair failed and we were unable to recover it. 00:41:41.624 [2024-10-01 22:40:36.806583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.624 [2024-10-01 22:40:36.806593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.625 qpair failed and we were unable to recover it. 00:41:41.625 [2024-10-01 22:40:36.806898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.625 [2024-10-01 22:40:36.806909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.625 qpair failed and we were unable to recover it. 00:41:41.625 [2024-10-01 22:40:36.807212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.625 [2024-10-01 22:40:36.807222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.625 qpair failed and we were unable to recover it. 00:41:41.625 [2024-10-01 22:40:36.807527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.625 [2024-10-01 22:40:36.807537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.625 qpair failed and we were unable to recover it. 
00:41:41.625 [2024-10-01 22:40:36.807861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.625 [2024-10-01 22:40:36.807871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.625 qpair failed and we were unable to recover it. 00:41:41.625 [2024-10-01 22:40:36.808169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.625 [2024-10-01 22:40:36.808180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.625 qpair failed and we were unable to recover it. 00:41:41.625 [2024-10-01 22:40:36.808484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.625 [2024-10-01 22:40:36.808494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.625 qpair failed and we were unable to recover it. 00:41:41.625 [2024-10-01 22:40:36.808797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.625 [2024-10-01 22:40:36.808808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.625 qpair failed and we were unable to recover it. 00:41:41.625 [2024-10-01 22:40:36.809094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.625 [2024-10-01 22:40:36.809104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.625 qpair failed and we were unable to recover it. 00:41:41.625 [2024-10-01 22:40:36.809388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.625 [2024-10-01 22:40:36.809397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.625 qpair failed and we were unable to recover it. 00:41:41.625 [2024-10-01 22:40:36.809704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.625 [2024-10-01 22:40:36.809714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.625 qpair failed and we were unable to recover it. 00:41:41.625 [2024-10-01 22:40:36.809934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.625 [2024-10-01 22:40:36.809944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.625 qpair failed and we were unable to recover it. 00:41:41.625 [2024-10-01 22:40:36.810203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.625 [2024-10-01 22:40:36.810212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.625 qpair failed and we were unable to recover it. 00:41:41.625 [2024-10-01 22:40:36.810550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.625 [2024-10-01 22:40:36.810560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.625 qpair failed and we were unable to recover it. 
00:41:41.625 [2024-10-01 22:40:36.810877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.625 [2024-10-01 22:40:36.810887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.625 qpair failed and we were unable to recover it. 00:41:41.625 [2024-10-01 22:40:36.811196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.625 [2024-10-01 22:40:36.811205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.625 qpair failed and we were unable to recover it. 00:41:41.625 [2024-10-01 22:40:36.811552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.625 [2024-10-01 22:40:36.811562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.625 qpair failed and we were unable to recover it. 00:41:41.625 [2024-10-01 22:40:36.811876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.625 [2024-10-01 22:40:36.811886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.625 qpair failed and we were unable to recover it. 00:41:41.625 [2024-10-01 22:40:36.812192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.625 [2024-10-01 22:40:36.812202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.625 qpair failed and we were unable to recover it. 00:41:41.625 [2024-10-01 22:40:36.812508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.625 [2024-10-01 22:40:36.812517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.625 qpair failed and we were unable to recover it. 00:41:41.625 [2024-10-01 22:40:36.812821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.625 [2024-10-01 22:40:36.812831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.625 qpair failed and we were unable to recover it. 00:41:41.625 [2024-10-01 22:40:36.813039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.625 [2024-10-01 22:40:36.813049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.625 qpair failed and we were unable to recover it. 00:41:41.625 [2024-10-01 22:40:36.813373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.625 [2024-10-01 22:40:36.813382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.625 qpair failed and we were unable to recover it. 00:41:41.625 [2024-10-01 22:40:36.813690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.625 [2024-10-01 22:40:36.813700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.625 qpair failed and we were unable to recover it. 
00:41:41.625 [2024-10-01 22:40:36.814020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.625 [2024-10-01 22:40:36.814030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.625 qpair failed and we were unable to recover it. 00:41:41.625 [2024-10-01 22:40:36.814317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.625 [2024-10-01 22:40:36.814328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.625 qpair failed and we were unable to recover it. 00:41:41.625 [2024-10-01 22:40:36.814631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.626 [2024-10-01 22:40:36.814642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.626 qpair failed and we were unable to recover it. 00:41:41.626 [2024-10-01 22:40:36.814942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.626 [2024-10-01 22:40:36.814955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.626 qpair failed and we were unable to recover it. 00:41:41.626 [2024-10-01 22:40:36.815261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.626 [2024-10-01 22:40:36.815270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.626 qpair failed and we were unable to recover it. 00:41:41.626 [2024-10-01 22:40:36.815540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.626 [2024-10-01 22:40:36.815550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.626 qpair failed and we were unable to recover it. 00:41:41.626 [2024-10-01 22:40:36.815869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.626 [2024-10-01 22:40:36.815879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.626 qpair failed and we were unable to recover it. 00:41:41.626 [2024-10-01 22:40:36.816150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.626 [2024-10-01 22:40:36.816160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.626 qpair failed and we were unable to recover it. 00:41:41.626 [2024-10-01 22:40:36.816462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.626 [2024-10-01 22:40:36.816472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.626 qpair failed and we were unable to recover it. 00:41:41.626 [2024-10-01 22:40:36.816647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.626 [2024-10-01 22:40:36.816658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.626 qpair failed and we were unable to recover it. 
00:41:41.626 [2024-10-01 22:40:36.816942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.626 [2024-10-01 22:40:36.816951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.626 qpair failed and we were unable to recover it. 00:41:41.626 [2024-10-01 22:40:36.817261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.626 [2024-10-01 22:40:36.817271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.626 qpair failed and we were unable to recover it. 00:41:41.626 [2024-10-01 22:40:36.817576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.626 [2024-10-01 22:40:36.817586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.626 qpair failed and we were unable to recover it. 00:41:41.626 [2024-10-01 22:40:36.817875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.626 [2024-10-01 22:40:36.817886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.626 qpair failed and we were unable to recover it. 00:41:41.626 [2024-10-01 22:40:36.818191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.626 [2024-10-01 22:40:36.818201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.626 qpair failed and we were unable to recover it. 00:41:41.626 [2024-10-01 22:40:36.818556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.626 [2024-10-01 22:40:36.818566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.626 qpair failed and we were unable to recover it. 00:41:41.626 [2024-10-01 22:40:36.818873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.626 [2024-10-01 22:40:36.818883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.626 qpair failed and we were unable to recover it. 00:41:41.626 [2024-10-01 22:40:36.819232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.626 [2024-10-01 22:40:36.819242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.626 qpair failed and we were unable to recover it. 00:41:41.626 [2024-10-01 22:40:36.819555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.626 [2024-10-01 22:40:36.819565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.626 qpair failed and we were unable to recover it. 00:41:41.626 [2024-10-01 22:40:36.819811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.626 [2024-10-01 22:40:36.819821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.626 qpair failed and we were unable to recover it. 
00:41:41.626 [2024-10-01 22:40:36.819990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.626 [2024-10-01 22:40:36.820001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.626 qpair failed and we were unable to recover it. 00:41:41.626 [2024-10-01 22:40:36.820335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.626 [2024-10-01 22:40:36.820345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.626 qpair failed and we were unable to recover it. 00:41:41.626 [2024-10-01 22:40:36.820535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.626 [2024-10-01 22:40:36.820545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.626 qpair failed and we were unable to recover it. 00:41:41.626 [2024-10-01 22:40:36.820808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.626 [2024-10-01 22:40:36.820819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.626 qpair failed and we were unable to recover it. 00:41:41.626 [2024-10-01 22:40:36.821121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.626 [2024-10-01 22:40:36.821131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.626 qpair failed and we were unable to recover it. 00:41:41.626 [2024-10-01 22:40:36.821410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.626 [2024-10-01 22:40:36.821427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.626 qpair failed and we were unable to recover it. 00:41:41.626 [2024-10-01 22:40:36.821746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.626 [2024-10-01 22:40:36.821757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.626 qpair failed and we were unable to recover it. 00:41:41.626 [2024-10-01 22:40:36.822066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.626 [2024-10-01 22:40:36.822076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.626 qpair failed and we were unable to recover it. 00:41:41.626 [2024-10-01 22:40:36.822381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.626 [2024-10-01 22:40:36.822392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.626 qpair failed and we were unable to recover it. 00:41:41.626 [2024-10-01 22:40:36.822724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.626 [2024-10-01 22:40:36.822734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.626 qpair failed and we were unable to recover it. 
00:41:41.907 [2024-10-01 22:40:36.880983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.907 [2024-10-01 22:40:36.881000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.907 qpair failed and we were unable to recover it. 00:41:41.907 [2024-10-01 22:40:36.881333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.907 [2024-10-01 22:40:36.881343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.907 qpair failed and we were unable to recover it. 00:41:41.907 [2024-10-01 22:40:36.881622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.907 [2024-10-01 22:40:36.881636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.907 qpair failed and we were unable to recover it. 00:41:41.907 [2024-10-01 22:40:36.881955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.907 [2024-10-01 22:40:36.881965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.907 qpair failed and we were unable to recover it. 00:41:41.907 [2024-10-01 22:40:36.882274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.907 [2024-10-01 22:40:36.882283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.907 qpair failed and we were unable to recover it. 00:41:41.907 [2024-10-01 22:40:36.882562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.907 [2024-10-01 22:40:36.882573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.907 qpair failed and we were unable to recover it. 00:41:41.907 [2024-10-01 22:40:36.882879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.908 [2024-10-01 22:40:36.882890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.908 qpair failed and we were unable to recover it. 00:41:41.908 [2024-10-01 22:40:36.883201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.908 [2024-10-01 22:40:36.883212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.908 qpair failed and we were unable to recover it. 00:41:41.908 [2024-10-01 22:40:36.883510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.908 [2024-10-01 22:40:36.883520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.908 qpair failed and we were unable to recover it. 00:41:41.908 [2024-10-01 22:40:36.883746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.908 [2024-10-01 22:40:36.883756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.908 qpair failed and we were unable to recover it. 
00:41:41.908 [2024-10-01 22:40:36.884040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.908 [2024-10-01 22:40:36.884051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.908 qpair failed and we were unable to recover it. 00:41:41.908 [2024-10-01 22:40:36.884356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.908 [2024-10-01 22:40:36.884367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.908 qpair failed and we were unable to recover it. 00:41:41.908 [2024-10-01 22:40:36.884711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.908 [2024-10-01 22:40:36.884722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.908 qpair failed and we were unable to recover it. 00:41:41.908 [2024-10-01 22:40:36.884876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.908 [2024-10-01 22:40:36.884886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.908 qpair failed and we were unable to recover it. 00:41:41.908 [2024-10-01 22:40:36.885195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.908 [2024-10-01 22:40:36.885205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.908 qpair failed and we were unable to recover it. 00:41:41.908 [2024-10-01 22:40:36.885506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.908 [2024-10-01 22:40:36.885524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.908 qpair failed and we were unable to recover it. 00:41:41.908 [2024-10-01 22:40:36.885743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.908 [2024-10-01 22:40:36.885753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.908 qpair failed and we were unable to recover it. 00:41:41.908 [2024-10-01 22:40:36.886054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.908 [2024-10-01 22:40:36.886064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.908 qpair failed and we were unable to recover it. 00:41:41.908 [2024-10-01 22:40:36.886230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.908 [2024-10-01 22:40:36.886242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.908 qpair failed and we were unable to recover it. 00:41:41.908 [2024-10-01 22:40:36.886506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.908 [2024-10-01 22:40:36.886515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.908 qpair failed and we were unable to recover it. 
00:41:41.908 [2024-10-01 22:40:36.886815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.908 [2024-10-01 22:40:36.886825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.908 qpair failed and we were unable to recover it. 00:41:41.908 [2024-10-01 22:40:36.887007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.908 [2024-10-01 22:40:36.887017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.908 qpair failed and we were unable to recover it. 00:41:41.908 [2024-10-01 22:40:36.887299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.908 [2024-10-01 22:40:36.887309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.908 qpair failed and we were unable to recover it. 00:41:41.908 [2024-10-01 22:40:36.887642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.908 [2024-10-01 22:40:36.887652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.908 qpair failed and we were unable to recover it. 00:41:41.908 [2024-10-01 22:40:36.887959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.908 [2024-10-01 22:40:36.887969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.908 qpair failed and we were unable to recover it. 00:41:41.908 [2024-10-01 22:40:36.888262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.908 [2024-10-01 22:40:36.888272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.908 qpair failed and we were unable to recover it. 00:41:41.908 [2024-10-01 22:40:36.888570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.908 [2024-10-01 22:40:36.888579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.908 qpair failed and we were unable to recover it. 00:41:41.908 [2024-10-01 22:40:36.888873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.908 [2024-10-01 22:40:36.888886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.908 qpair failed and we were unable to recover it. 00:41:41.908 [2024-10-01 22:40:36.889165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.908 [2024-10-01 22:40:36.889184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.908 qpair failed and we were unable to recover it. 00:41:41.908 [2024-10-01 22:40:36.889513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.908 [2024-10-01 22:40:36.889523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.908 qpair failed and we were unable to recover it. 
00:41:41.908 [2024-10-01 22:40:36.889829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.908 [2024-10-01 22:40:36.889839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.908 qpair failed and we were unable to recover it. 00:41:41.908 [2024-10-01 22:40:36.890151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.908 [2024-10-01 22:40:36.890162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.908 qpair failed and we were unable to recover it. 00:41:41.908 [2024-10-01 22:40:36.890470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.908 [2024-10-01 22:40:36.890480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.908 qpair failed and we were unable to recover it. 00:41:41.908 [2024-10-01 22:40:36.890661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.909 [2024-10-01 22:40:36.890671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.909 qpair failed and we were unable to recover it. 00:41:41.909 [2024-10-01 22:40:36.890856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.909 [2024-10-01 22:40:36.890866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.909 qpair failed and we were unable to recover it. 00:41:41.909 [2024-10-01 22:40:36.891237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.909 [2024-10-01 22:40:36.891247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.909 qpair failed and we were unable to recover it. 00:41:41.909 [2024-10-01 22:40:36.891650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.909 [2024-10-01 22:40:36.891660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.909 qpair failed and we were unable to recover it. 00:41:41.909 [2024-10-01 22:40:36.891887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.909 [2024-10-01 22:40:36.891897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.909 qpair failed and we were unable to recover it. 00:41:41.909 [2024-10-01 22:40:36.892096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.909 [2024-10-01 22:40:36.892106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.909 qpair failed and we were unable to recover it. 00:41:41.909 [2024-10-01 22:40:36.892421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.909 [2024-10-01 22:40:36.892431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.909 qpair failed and we were unable to recover it. 
00:41:41.909 [2024-10-01 22:40:36.892652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.909 [2024-10-01 22:40:36.892663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.909 qpair failed and we were unable to recover it. 00:41:41.909 [2024-10-01 22:40:36.892860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.909 [2024-10-01 22:40:36.892871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.909 qpair failed and we were unable to recover it. 00:41:41.909 [2024-10-01 22:40:36.893160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.909 [2024-10-01 22:40:36.893170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.909 qpair failed and we were unable to recover it. 00:41:41.909 [2024-10-01 22:40:36.893457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.909 [2024-10-01 22:40:36.893474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.909 qpair failed and we were unable to recover it. 00:41:41.909 [2024-10-01 22:40:36.893640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.909 [2024-10-01 22:40:36.893651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.909 qpair failed and we were unable to recover it. 00:41:41.909 [2024-10-01 22:40:36.893957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.909 [2024-10-01 22:40:36.893967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.909 qpair failed and we were unable to recover it. 00:41:41.909 [2024-10-01 22:40:36.894079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.909 [2024-10-01 22:40:36.894088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.909 qpair failed and we were unable to recover it. 00:41:41.909 [2024-10-01 22:40:36.894310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.909 [2024-10-01 22:40:36.894319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.909 qpair failed and we were unable to recover it. 00:41:41.909 [2024-10-01 22:40:36.894632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.909 [2024-10-01 22:40:36.894642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.909 qpair failed and we were unable to recover it. 00:41:41.909 [2024-10-01 22:40:36.894955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.909 [2024-10-01 22:40:36.894965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.909 qpair failed and we were unable to recover it. 
00:41:41.909 [2024-10-01 22:40:36.895301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.909 [2024-10-01 22:40:36.895310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.909 qpair failed and we were unable to recover it. 00:41:41.909 [2024-10-01 22:40:36.895622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.909 [2024-10-01 22:40:36.895637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.909 qpair failed and we were unable to recover it. 00:41:41.909 [2024-10-01 22:40:36.895954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.909 [2024-10-01 22:40:36.895964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.909 qpair failed and we were unable to recover it. 00:41:41.909 [2024-10-01 22:40:36.896317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.909 [2024-10-01 22:40:36.896327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.909 qpair failed and we were unable to recover it. 00:41:41.909 [2024-10-01 22:40:36.896634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.909 [2024-10-01 22:40:36.896647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.909 qpair failed and we were unable to recover it. 00:41:41.909 [2024-10-01 22:40:36.896866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.909 [2024-10-01 22:40:36.896877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.909 qpair failed and we were unable to recover it. 00:41:41.909 [2024-10-01 22:40:36.897226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.909 [2024-10-01 22:40:36.897237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.909 qpair failed and we were unable to recover it. 00:41:41.909 [2024-10-01 22:40:36.897423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.909 [2024-10-01 22:40:36.897433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.909 qpair failed and we were unable to recover it. 00:41:41.909 [2024-10-01 22:40:36.897762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.909 [2024-10-01 22:40:36.897773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.909 qpair failed and we were unable to recover it. 00:41:41.909 [2024-10-01 22:40:36.898089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.909 [2024-10-01 22:40:36.898099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.909 qpair failed and we were unable to recover it. 
00:41:41.909 [2024-10-01 22:40:36.898284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.910 [2024-10-01 22:40:36.898295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.910 qpair failed and we were unable to recover it. 00:41:41.910 [2024-10-01 22:40:36.898602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.910 [2024-10-01 22:40:36.898611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.910 qpair failed and we were unable to recover it. 00:41:41.910 [2024-10-01 22:40:36.898848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.910 [2024-10-01 22:40:36.898858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.910 qpair failed and we were unable to recover it. 00:41:41.910 [2024-10-01 22:40:36.899187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.910 [2024-10-01 22:40:36.899197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.910 qpair failed and we were unable to recover it. 00:41:41.910 [2024-10-01 22:40:36.899523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.910 [2024-10-01 22:40:36.899533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.910 qpair failed and we were unable to recover it. 00:41:41.910 [2024-10-01 22:40:36.899884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.910 [2024-10-01 22:40:36.899895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.910 qpair failed and we were unable to recover it. 00:41:41.910 [2024-10-01 22:40:36.900208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.910 [2024-10-01 22:40:36.900218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.910 qpair failed and we were unable to recover it. 00:41:41.910 [2024-10-01 22:40:36.900526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.910 [2024-10-01 22:40:36.900535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.910 qpair failed and we were unable to recover it. 00:41:41.910 [2024-10-01 22:40:36.900843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.910 [2024-10-01 22:40:36.900854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.910 qpair failed and we were unable to recover it. 00:41:41.910 [2024-10-01 22:40:36.901029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.910 [2024-10-01 22:40:36.901041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.910 qpair failed and we were unable to recover it. 
00:41:41.910 [2024-10-01 22:40:36.901313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.910 [2024-10-01 22:40:36.901323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.910 qpair failed and we were unable to recover it. 00:41:41.910 [2024-10-01 22:40:36.901643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.910 [2024-10-01 22:40:36.901654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.910 qpair failed and we were unable to recover it. 00:41:41.910 [2024-10-01 22:40:36.901830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.910 [2024-10-01 22:40:36.901841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.910 qpair failed and we were unable to recover it. 00:41:41.910 [2024-10-01 22:40:36.902141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.910 [2024-10-01 22:40:36.902151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.910 qpair failed and we were unable to recover it. 00:41:41.910 [2024-10-01 22:40:36.902319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.910 [2024-10-01 22:40:36.902329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.910 qpair failed and we were unable to recover it. 00:41:41.910 [2024-10-01 22:40:36.902678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.910 [2024-10-01 22:40:36.902689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.910 qpair failed and we were unable to recover it. 00:41:41.910 [2024-10-01 22:40:36.903062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.910 [2024-10-01 22:40:36.903072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.910 qpair failed and we were unable to recover it. 00:41:41.910 [2024-10-01 22:40:36.903251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.910 [2024-10-01 22:40:36.903261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.910 qpair failed and we were unable to recover it. 00:41:41.910 [2024-10-01 22:40:36.903638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.910 [2024-10-01 22:40:36.903649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.910 qpair failed and we were unable to recover it. 00:41:41.910 [2024-10-01 22:40:36.903959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.910 [2024-10-01 22:40:36.903970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.910 qpair failed and we were unable to recover it. 
00:41:41.910 [2024-10-01 22:40:36.904337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.910 [2024-10-01 22:40:36.904347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.910 qpair failed and we were unable to recover it. 00:41:41.910 [2024-10-01 22:40:36.904649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.910 [2024-10-01 22:40:36.904660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.910 qpair failed and we were unable to recover it. 00:41:41.910 [2024-10-01 22:40:36.904772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.910 [2024-10-01 22:40:36.904783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.910 qpair failed and we were unable to recover it. 00:41:41.910 [2024-10-01 22:40:36.905098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.910 [2024-10-01 22:40:36.905108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.910 qpair failed and we were unable to recover it. 00:41:41.910 [2024-10-01 22:40:36.905401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.910 [2024-10-01 22:40:36.905411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.910 qpair failed and we were unable to recover it. 00:41:41.910 [2024-10-01 22:40:36.905767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.910 [2024-10-01 22:40:36.905778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.910 qpair failed and we were unable to recover it. 00:41:41.910 [2024-10-01 22:40:36.906068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.910 [2024-10-01 22:40:36.906078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.910 qpair failed and we were unable to recover it. 00:41:41.910 [2024-10-01 22:40:36.906408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.910 [2024-10-01 22:40:36.906418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.910 qpair failed and we were unable to recover it. 00:41:41.911 [2024-10-01 22:40:36.906613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.911 [2024-10-01 22:40:36.906623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.911 qpair failed and we were unable to recover it. 00:41:41.911 [2024-10-01 22:40:36.906846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.911 [2024-10-01 22:40:36.906856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.911 qpair failed and we were unable to recover it. 
00:41:41.911 [2024-10-01 22:40:36.907182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.911 [2024-10-01 22:40:36.907192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.911 qpair failed and we were unable to recover it. 00:41:41.911 [2024-10-01 22:40:36.907504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.911 [2024-10-01 22:40:36.907514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.911 qpair failed and we were unable to recover it. 00:41:41.911 [2024-10-01 22:40:36.907789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.911 [2024-10-01 22:40:36.907799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.911 qpair failed and we were unable to recover it. 00:41:41.911 [2024-10-01 22:40:36.908013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.911 [2024-10-01 22:40:36.908023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.911 qpair failed and we were unable to recover it. 00:41:41.911 [2024-10-01 22:40:36.908246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.911 [2024-10-01 22:40:36.908256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.911 qpair failed and we were unable to recover it. 00:41:41.911 [2024-10-01 22:40:36.908330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.911 [2024-10-01 22:40:36.908340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.911 qpair failed and we were unable to recover it. 00:41:41.911 [2024-10-01 22:40:36.908648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.911 [2024-10-01 22:40:36.908658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.911 qpair failed and we were unable to recover it. 00:41:41.911 [2024-10-01 22:40:36.908947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.911 [2024-10-01 22:40:36.908957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.911 qpair failed and we were unable to recover it. 00:41:41.911 [2024-10-01 22:40:36.909259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.911 [2024-10-01 22:40:36.909269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.911 qpair failed and we were unable to recover it. 00:41:41.911 [2024-10-01 22:40:36.909577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.911 [2024-10-01 22:40:36.909588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.911 qpair failed and we were unable to recover it. 
00:41:41.911 [2024-10-01 22:40:36.909879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.911 [2024-10-01 22:40:36.909889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.911 qpair failed and we were unable to recover it. 00:41:41.911 [2024-10-01 22:40:36.910208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.911 [2024-10-01 22:40:36.910219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.911 qpair failed and we were unable to recover it. 00:41:41.911 [2024-10-01 22:40:36.910502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.911 [2024-10-01 22:40:36.910513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.911 qpair failed and we were unable to recover it. 00:41:41.911 [2024-10-01 22:40:36.910679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.911 [2024-10-01 22:40:36.910689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.911 qpair failed and we were unable to recover it. 00:41:41.911 [2024-10-01 22:40:36.910959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.911 [2024-10-01 22:40:36.910969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.911 qpair failed and we were unable to recover it. 00:41:41.911 [2024-10-01 22:40:36.911139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.911 [2024-10-01 22:40:36.911149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.911 qpair failed and we were unable to recover it. 00:41:41.911 [2024-10-01 22:40:36.911337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.911 [2024-10-01 22:40:36.911347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.911 qpair failed and we were unable to recover it. 00:41:41.911 [2024-10-01 22:40:36.911543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.911 [2024-10-01 22:40:36.911552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.911 qpair failed and we were unable to recover it. 00:41:41.911 [2024-10-01 22:40:36.911909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.911 [2024-10-01 22:40:36.911919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.911 qpair failed and we were unable to recover it. 00:41:41.911 [2024-10-01 22:40:36.912183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.911 [2024-10-01 22:40:36.912193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.911 qpair failed and we were unable to recover it. 
00:41:41.911 [2024-10-01 22:40:36.912363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.911 [2024-10-01 22:40:36.912373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.911 qpair failed and we were unable to recover it. 00:41:41.911 [2024-10-01 22:40:36.912589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.911 [2024-10-01 22:40:36.912599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.911 qpair failed and we were unable to recover it. 00:41:41.911 [2024-10-01 22:40:36.912924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.911 [2024-10-01 22:40:36.912935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.911 qpair failed and we were unable to recover it. 00:41:41.911 [2024-10-01 22:40:36.913240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.911 [2024-10-01 22:40:36.913250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.911 qpair failed and we were unable to recover it. 00:41:41.911 [2024-10-01 22:40:36.913428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.911 [2024-10-01 22:40:36.913437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.911 qpair failed and we were unable to recover it. 00:41:41.912 [2024-10-01 22:40:36.913832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.912 [2024-10-01 22:40:36.913843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.912 qpair failed and we were unable to recover it. 00:41:41.912 [2024-10-01 22:40:36.914144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.912 [2024-10-01 22:40:36.914155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.912 qpair failed and we were unable to recover it. 00:41:41.912 [2024-10-01 22:40:36.914454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.912 [2024-10-01 22:40:36.914464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.912 qpair failed and we were unable to recover it. 00:41:41.912 [2024-10-01 22:40:36.914672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.912 [2024-10-01 22:40:36.914682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.912 qpair failed and we were unable to recover it. 00:41:41.912 [2024-10-01 22:40:36.915006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.912 [2024-10-01 22:40:36.915016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.912 qpair failed and we were unable to recover it. 
00:41:41.912 [2024-10-01 22:40:36.915329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.912 [2024-10-01 22:40:36.915340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.912 qpair failed and we were unable to recover it. 00:41:41.912 [2024-10-01 22:40:36.915643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.912 [2024-10-01 22:40:36.915653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.912 qpair failed and we were unable to recover it. 00:41:41.912 [2024-10-01 22:40:36.915850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.912 [2024-10-01 22:40:36.915863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.912 qpair failed and we were unable to recover it. 00:41:41.912 [2024-10-01 22:40:36.916071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.912 [2024-10-01 22:40:36.916082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.912 qpair failed and we were unable to recover it. 00:41:41.912 [2024-10-01 22:40:36.916386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.912 [2024-10-01 22:40:36.916396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.912 qpair failed and we were unable to recover it. 00:41:41.912 [2024-10-01 22:40:36.916687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.912 [2024-10-01 22:40:36.916697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.912 qpair failed and we were unable to recover it. 00:41:41.912 [2024-10-01 22:40:36.917013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.912 [2024-10-01 22:40:36.917023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.912 qpair failed and we were unable to recover it. 00:41:41.912 [2024-10-01 22:40:36.917329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.912 [2024-10-01 22:40:36.917338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.912 qpair failed and we were unable to recover it. 00:41:41.912 [2024-10-01 22:40:36.917644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.912 [2024-10-01 22:40:36.917655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.912 qpair failed and we were unable to recover it. 00:41:41.912 [2024-10-01 22:40:36.917996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.912 [2024-10-01 22:40:36.918005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.912 qpair failed and we were unable to recover it. 
00:41:41.912 [2024-10-01 22:40:36.918297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.912 [2024-10-01 22:40:36.918307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.912 qpair failed and we were unable to recover it. 00:41:41.912 [2024-10-01 22:40:36.918619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.912 [2024-10-01 22:40:36.918634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.912 qpair failed and we were unable to recover it. 00:41:41.912 [2024-10-01 22:40:36.918939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.912 [2024-10-01 22:40:36.918949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.912 qpair failed and we were unable to recover it. 00:41:41.912 [2024-10-01 22:40:36.919141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.912 [2024-10-01 22:40:36.919152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.912 qpair failed and we were unable to recover it. 00:41:41.912 [2024-10-01 22:40:36.919489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.912 [2024-10-01 22:40:36.919499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.912 qpair failed and we were unable to recover it. 00:41:41.912 [2024-10-01 22:40:36.919776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.912 [2024-10-01 22:40:36.919786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.912 qpair failed and we were unable to recover it. 00:41:41.912 [2024-10-01 22:40:36.920111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.912 [2024-10-01 22:40:36.920121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.912 qpair failed and we were unable to recover it. 00:41:41.912 [2024-10-01 22:40:36.920376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.912 [2024-10-01 22:40:36.920385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.912 qpair failed and we were unable to recover it. 00:41:41.912 [2024-10-01 22:40:36.920683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.912 [2024-10-01 22:40:36.920693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.912 qpair failed and we were unable to recover it. 00:41:41.912 [2024-10-01 22:40:36.920874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.912 [2024-10-01 22:40:36.920884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.912 qpair failed and we were unable to recover it. 
00:41:41.912 [2024-10-01 22:40:36.921205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:41.913 [2024-10-01 22:40:36.921215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420
00:41:41.913 qpair failed and we were unable to recover it.
00:41:41.913 [... the same three-record error sequence repeats for every reconnect attempt between 22:40:36.921205 and 22:40:36.984886; each attempt fails with connect() errno = 111 against tqpair=0x130a180, addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." ...]
00:41:41.921 [2024-10-01 22:40:36.984875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:41.921 [2024-10-01 22:40:36.984886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420
00:41:41.921 qpair failed and we were unable to recover it.
00:41:41.921 [2024-10-01 22:40:36.985224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.921 [2024-10-01 22:40:36.985236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.921 qpair failed and we were unable to recover it. 00:41:41.921 [2024-10-01 22:40:36.985565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.921 [2024-10-01 22:40:36.985577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.921 qpair failed and we were unable to recover it. 00:41:41.921 [2024-10-01 22:40:36.985879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.921 [2024-10-01 22:40:36.985892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.921 qpair failed and we were unable to recover it. 00:41:41.921 [2024-10-01 22:40:36.986193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.921 [2024-10-01 22:40:36.986206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.921 qpair failed and we were unable to recover it. 00:41:41.921 [2024-10-01 22:40:36.986505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.921 [2024-10-01 22:40:36.986517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.921 qpair failed and we were unable to recover it. 00:41:41.921 [2024-10-01 22:40:36.986832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.921 [2024-10-01 22:40:36.986844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.921 qpair failed and we were unable to recover it. 00:41:41.921 [2024-10-01 22:40:36.987170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.921 [2024-10-01 22:40:36.987182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.921 qpair failed and we were unable to recover it. 00:41:41.921 [2024-10-01 22:40:36.987483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.921 [2024-10-01 22:40:36.987495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.921 qpair failed and we were unable to recover it. 00:41:41.921 [2024-10-01 22:40:36.987678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.921 [2024-10-01 22:40:36.987691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.921 qpair failed and we were unable to recover it. 00:41:41.921 [2024-10-01 22:40:36.988007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.921 [2024-10-01 22:40:36.988018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.921 qpair failed and we were unable to recover it. 
00:41:41.921 [2024-10-01 22:40:36.988280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.921 [2024-10-01 22:40:36.988291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.921 qpair failed and we were unable to recover it. 00:41:41.921 [2024-10-01 22:40:36.988613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.921 [2024-10-01 22:40:36.988628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.921 qpair failed and we were unable to recover it. 00:41:41.921 [2024-10-01 22:40:36.988895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.921 [2024-10-01 22:40:36.988906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.921 qpair failed and we were unable to recover it. 00:41:41.921 [2024-10-01 22:40:36.989190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.921 [2024-10-01 22:40:36.989201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.921 qpair failed and we were unable to recover it. 00:41:41.921 [2024-10-01 22:40:36.989509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.921 [2024-10-01 22:40:36.989521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.921 qpair failed and we were unable to recover it. 00:41:41.921 [2024-10-01 22:40:36.989827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.921 [2024-10-01 22:40:36.989839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.921 qpair failed and we were unable to recover it. 00:41:41.921 [2024-10-01 22:40:36.990148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.921 [2024-10-01 22:40:36.990163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.921 qpair failed and we were unable to recover it. 00:41:41.921 [2024-10-01 22:40:36.990486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.921 [2024-10-01 22:40:36.990497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.921 qpair failed and we were unable to recover it. 00:41:41.921 [2024-10-01 22:40:36.990857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.921 [2024-10-01 22:40:36.990868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.921 qpair failed and we were unable to recover it. 00:41:41.921 [2024-10-01 22:40:36.991199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.921 [2024-10-01 22:40:36.991210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.921 qpair failed and we were unable to recover it. 
00:41:41.921 [2024-10-01 22:40:36.991509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.921 [2024-10-01 22:40:36.991520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.921 qpair failed and we were unable to recover it. 00:41:41.921 [2024-10-01 22:40:36.991833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.921 [2024-10-01 22:40:36.991845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.921 qpair failed and we were unable to recover it. 00:41:41.922 [2024-10-01 22:40:36.992190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.922 [2024-10-01 22:40:36.992200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.922 qpair failed and we were unable to recover it. 00:41:41.922 [2024-10-01 22:40:36.992524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.922 [2024-10-01 22:40:36.992535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.922 qpair failed and we were unable to recover it. 00:41:41.922 [2024-10-01 22:40:36.992845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.922 [2024-10-01 22:40:36.992856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.922 qpair failed and we were unable to recover it. 00:41:41.922 [2024-10-01 22:40:36.993134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.922 [2024-10-01 22:40:36.993144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.922 qpair failed and we were unable to recover it. 00:41:41.922 [2024-10-01 22:40:36.993446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.922 [2024-10-01 22:40:36.993458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.922 qpair failed and we were unable to recover it. 00:41:41.922 [2024-10-01 22:40:36.993753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.922 [2024-10-01 22:40:36.993765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.922 qpair failed and we were unable to recover it. 00:41:41.922 [2024-10-01 22:40:36.994085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.922 [2024-10-01 22:40:36.994097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.922 qpair failed and we were unable to recover it. 00:41:41.922 [2024-10-01 22:40:36.994422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.922 [2024-10-01 22:40:36.994433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.922 qpair failed and we were unable to recover it. 
00:41:41.922 [2024-10-01 22:40:36.994714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.922 [2024-10-01 22:40:36.994726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.922 qpair failed and we were unable to recover it. 00:41:41.922 [2024-10-01 22:40:36.995045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.922 [2024-10-01 22:40:36.995056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.922 qpair failed and we were unable to recover it. 00:41:41.922 [2024-10-01 22:40:36.995367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.922 [2024-10-01 22:40:36.995379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.922 qpair failed and we were unable to recover it. 00:41:41.922 [2024-10-01 22:40:36.995709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.922 [2024-10-01 22:40:36.995721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.922 qpair failed and we were unable to recover it. 00:41:41.922 [2024-10-01 22:40:36.996027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.922 [2024-10-01 22:40:36.996040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.922 qpair failed and we were unable to recover it. 00:41:41.922 [2024-10-01 22:40:36.996341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.922 [2024-10-01 22:40:36.996352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.922 qpair failed and we were unable to recover it. 00:41:41.922 [2024-10-01 22:40:36.996655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.922 [2024-10-01 22:40:36.996668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.922 qpair failed and we were unable to recover it. 00:41:41.922 [2024-10-01 22:40:36.996975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.922 [2024-10-01 22:40:36.996986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.922 qpair failed and we were unable to recover it. 00:41:41.922 [2024-10-01 22:40:36.997293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.922 [2024-10-01 22:40:36.997305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.922 qpair failed and we were unable to recover it. 00:41:41.922 [2024-10-01 22:40:36.997613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.922 [2024-10-01 22:40:36.997629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.922 qpair failed and we were unable to recover it. 
00:41:41.922 [2024-10-01 22:40:36.997938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.922 [2024-10-01 22:40:36.997950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.922 qpair failed and we were unable to recover it. 00:41:41.922 [2024-10-01 22:40:36.998139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.922 [2024-10-01 22:40:36.998151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.922 qpair failed and we were unable to recover it. 00:41:41.922 [2024-10-01 22:40:36.998477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.922 [2024-10-01 22:40:36.998489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.922 qpair failed and we were unable to recover it. 00:41:41.922 [2024-10-01 22:40:36.998760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.922 [2024-10-01 22:40:36.998772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.922 qpair failed and we were unable to recover it. 00:41:41.922 [2024-10-01 22:40:36.999109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.922 [2024-10-01 22:40:36.999121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.922 qpair failed and we were unable to recover it. 00:41:41.922 [2024-10-01 22:40:36.999395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.922 [2024-10-01 22:40:36.999407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.922 qpair failed and we were unable to recover it. 00:41:41.922 [2024-10-01 22:40:36.999709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.922 [2024-10-01 22:40:36.999720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.922 qpair failed and we were unable to recover it. 00:41:41.922 [2024-10-01 22:40:37.000029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.922 [2024-10-01 22:40:37.000041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.922 qpair failed and we were unable to recover it. 00:41:41.922 [2024-10-01 22:40:37.000352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.922 [2024-10-01 22:40:37.000363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.922 qpair failed and we were unable to recover it. 00:41:41.922 [2024-10-01 22:40:37.000641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.923 [2024-10-01 22:40:37.000652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.923 qpair failed and we were unable to recover it. 
00:41:41.923 [2024-10-01 22:40:37.000881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.923 [2024-10-01 22:40:37.000892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.923 qpair failed and we were unable to recover it. 00:41:41.923 [2024-10-01 22:40:37.001210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.923 [2024-10-01 22:40:37.001222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.923 qpair failed and we were unable to recover it. 00:41:41.923 [2024-10-01 22:40:37.001383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.923 [2024-10-01 22:40:37.001395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.923 qpair failed and we were unable to recover it. 00:41:41.923 [2024-10-01 22:40:37.001710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.923 [2024-10-01 22:40:37.001722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.923 qpair failed and we were unable to recover it. 00:41:41.923 [2024-10-01 22:40:37.002046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.923 [2024-10-01 22:40:37.002057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.923 qpair failed and we were unable to recover it. 00:41:41.923 [2024-10-01 22:40:37.002359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.923 [2024-10-01 22:40:37.002371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.923 qpair failed and we were unable to recover it. 00:41:41.923 [2024-10-01 22:40:37.002676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.923 [2024-10-01 22:40:37.002688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.923 qpair failed and we were unable to recover it. 00:41:41.923 [2024-10-01 22:40:37.003006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.923 [2024-10-01 22:40:37.003019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.923 qpair failed and we were unable to recover it. 00:41:41.923 [2024-10-01 22:40:37.003313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.923 [2024-10-01 22:40:37.003324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.923 qpair failed and we were unable to recover it. 00:41:41.923 [2024-10-01 22:40:37.003631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.923 [2024-10-01 22:40:37.003644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.923 qpair failed and we were unable to recover it. 
00:41:41.923 [2024-10-01 22:40:37.003939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.923 [2024-10-01 22:40:37.003950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.923 qpair failed and we were unable to recover it. 00:41:41.923 [2024-10-01 22:40:37.004230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.923 [2024-10-01 22:40:37.004240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.923 qpair failed and we were unable to recover it. 00:41:41.923 [2024-10-01 22:40:37.004541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.923 [2024-10-01 22:40:37.004552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.923 qpair failed and we were unable to recover it. 00:41:41.923 [2024-10-01 22:40:37.004867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.923 [2024-10-01 22:40:37.004880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.923 qpair failed and we were unable to recover it. 00:41:41.923 [2024-10-01 22:40:37.005180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.923 [2024-10-01 22:40:37.005191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.923 qpair failed and we were unable to recover it. 00:41:41.923 [2024-10-01 22:40:37.005493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.923 [2024-10-01 22:40:37.005505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.923 qpair failed and we were unable to recover it. 00:41:41.923 [2024-10-01 22:40:37.005789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.923 [2024-10-01 22:40:37.005800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.923 qpair failed and we were unable to recover it. 00:41:41.923 [2024-10-01 22:40:37.006118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.923 [2024-10-01 22:40:37.006129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.923 qpair failed and we were unable to recover it. 00:41:41.923 [2024-10-01 22:40:37.006400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.923 [2024-10-01 22:40:37.006411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.923 qpair failed and we were unable to recover it. 00:41:41.923 [2024-10-01 22:40:37.006603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.923 [2024-10-01 22:40:37.006613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.923 qpair failed and we were unable to recover it. 
00:41:41.923 [2024-10-01 22:40:37.006925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.923 [2024-10-01 22:40:37.006936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.923 qpair failed and we were unable to recover it. 00:41:41.923 [2024-10-01 22:40:37.007260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.923 [2024-10-01 22:40:37.007272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.923 qpair failed and we were unable to recover it. 00:41:41.923 [2024-10-01 22:40:37.007573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.923 [2024-10-01 22:40:37.007583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.923 qpair failed and we were unable to recover it. 00:41:41.923 [2024-10-01 22:40:37.007888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.923 [2024-10-01 22:40:37.007900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.923 qpair failed and we were unable to recover it. 00:41:41.923 [2024-10-01 22:40:37.008205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.923 [2024-10-01 22:40:37.008217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.923 qpair failed and we were unable to recover it. 00:41:41.923 [2024-10-01 22:40:37.008561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.923 [2024-10-01 22:40:37.008573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.923 qpair failed and we were unable to recover it. 00:41:41.924 [2024-10-01 22:40:37.008864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.924 [2024-10-01 22:40:37.008875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.924 qpair failed and we were unable to recover it. 00:41:41.924 [2024-10-01 22:40:37.009151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.924 [2024-10-01 22:40:37.009162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.924 qpair failed and we were unable to recover it. 00:41:41.924 [2024-10-01 22:40:37.009460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.924 [2024-10-01 22:40:37.009472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.924 qpair failed and we were unable to recover it. 00:41:41.924 [2024-10-01 22:40:37.009780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.924 [2024-10-01 22:40:37.009791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.924 qpair failed and we were unable to recover it. 
00:41:41.924 [2024-10-01 22:40:37.010096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.924 [2024-10-01 22:40:37.010109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.924 qpair failed and we were unable to recover it. 00:41:41.924 [2024-10-01 22:40:37.010388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.924 [2024-10-01 22:40:37.010399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.924 qpair failed and we were unable to recover it. 00:41:41.924 [2024-10-01 22:40:37.010714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.924 [2024-10-01 22:40:37.010726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.924 qpair failed and we were unable to recover it. 00:41:41.924 [2024-10-01 22:40:37.011062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.924 [2024-10-01 22:40:37.011073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.924 qpair failed and we were unable to recover it. 00:41:41.924 [2024-10-01 22:40:37.011381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.924 [2024-10-01 22:40:37.011395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.924 qpair failed and we were unable to recover it. 00:41:41.924 [2024-10-01 22:40:37.011615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.924 [2024-10-01 22:40:37.011631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.924 qpair failed and we were unable to recover it. 00:41:41.924 [2024-10-01 22:40:37.011936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.924 [2024-10-01 22:40:37.011948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.924 qpair failed and we were unable to recover it. 00:41:41.924 [2024-10-01 22:40:37.012245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.924 [2024-10-01 22:40:37.012256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.924 qpair failed and we were unable to recover it. 00:41:41.924 [2024-10-01 22:40:37.012568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.924 [2024-10-01 22:40:37.012580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.924 qpair failed and we were unable to recover it. 00:41:41.924 [2024-10-01 22:40:37.012859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.924 [2024-10-01 22:40:37.012871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.924 qpair failed and we were unable to recover it. 
00:41:41.924 [2024-10-01 22:40:37.013174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.924 [2024-10-01 22:40:37.013185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.924 qpair failed and we were unable to recover it. 00:41:41.924 [2024-10-01 22:40:37.013556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.924 [2024-10-01 22:40:37.013567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.924 qpair failed and we were unable to recover it. 00:41:41.924 [2024-10-01 22:40:37.013859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.924 [2024-10-01 22:40:37.013871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.924 qpair failed and we were unable to recover it. 00:41:41.924 [2024-10-01 22:40:37.014173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.924 [2024-10-01 22:40:37.014184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.924 qpair failed and we were unable to recover it. 00:41:41.924 [2024-10-01 22:40:37.014442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.924 [2024-10-01 22:40:37.014453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.924 qpair failed and we were unable to recover it. 00:41:41.924 [2024-10-01 22:40:37.014772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.924 [2024-10-01 22:40:37.014783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.924 qpair failed and we were unable to recover it. 00:41:41.924 [2024-10-01 22:40:37.015091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.924 [2024-10-01 22:40:37.015103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.924 qpair failed and we were unable to recover it. 00:41:41.924 [2024-10-01 22:40:37.015436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.924 [2024-10-01 22:40:37.015447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.924 qpair failed and we were unable to recover it. 00:41:41.924 [2024-10-01 22:40:37.015750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.924 [2024-10-01 22:40:37.015762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.924 qpair failed and we were unable to recover it. 00:41:41.924 [2024-10-01 22:40:37.016052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.924 [2024-10-01 22:40:37.016063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.924 qpair failed and we were unable to recover it. 
00:41:41.924 [2024-10-01 22:40:37.016374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.924 [2024-10-01 22:40:37.016386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.924 qpair failed and we were unable to recover it. 00:41:41.924 [2024-10-01 22:40:37.016568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.924 [2024-10-01 22:40:37.016581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.924 qpair failed and we were unable to recover it. 00:41:41.924 [2024-10-01 22:40:37.016911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.924 [2024-10-01 22:40:37.016923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.924 qpair failed and we were unable to recover it. 00:41:41.925 [2024-10-01 22:40:37.017225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.925 [2024-10-01 22:40:37.017237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.925 qpair failed and we were unable to recover it. 00:41:41.925 [2024-10-01 22:40:37.017503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.925 [2024-10-01 22:40:37.017514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.925 qpair failed and we were unable to recover it. 00:41:41.925 [2024-10-01 22:40:37.017789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.925 [2024-10-01 22:40:37.017801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.925 qpair failed and we were unable to recover it. 00:41:41.925 [2024-10-01 22:40:37.018092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.925 [2024-10-01 22:40:37.018104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.925 qpair failed and we were unable to recover it. 00:41:41.925 [2024-10-01 22:40:37.018398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.925 [2024-10-01 22:40:37.018410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.925 qpair failed and we were unable to recover it. 00:41:41.925 [2024-10-01 22:40:37.018721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.925 [2024-10-01 22:40:37.018733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.925 qpair failed and we were unable to recover it. 00:41:41.925 [2024-10-01 22:40:37.019011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.925 [2024-10-01 22:40:37.019022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.925 qpair failed and we were unable to recover it. 
00:41:41.925 [2024-10-01 22:40:37.019321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.925 [2024-10-01 22:40:37.019332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.925 qpair failed and we were unable to recover it. 00:41:41.925 [2024-10-01 22:40:37.019636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.925 [2024-10-01 22:40:37.019651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.925 qpair failed and we were unable to recover it. 00:41:41.925 [2024-10-01 22:40:37.019970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.925 [2024-10-01 22:40:37.019981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.925 qpair failed and we were unable to recover it. 00:41:41.925 [2024-10-01 22:40:37.020265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.925 [2024-10-01 22:40:37.020277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.925 qpair failed and we were unable to recover it. 00:41:41.925 [2024-10-01 22:40:37.020471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.925 [2024-10-01 22:40:37.020481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.925 qpair failed and we were unable to recover it. 00:41:41.925 [2024-10-01 22:40:37.020755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.925 [2024-10-01 22:40:37.020766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.925 qpair failed and we were unable to recover it. 00:41:41.925 [2024-10-01 22:40:37.021086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.925 [2024-10-01 22:40:37.021097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.925 qpair failed and we were unable to recover it. 00:41:41.925 [2024-10-01 22:40:37.021419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.925 [2024-10-01 22:40:37.021431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.925 qpair failed and we were unable to recover it. 00:41:41.925 [2024-10-01 22:40:37.021731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.925 [2024-10-01 22:40:37.021742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.925 qpair failed and we were unable to recover it. 00:41:41.925 [2024-10-01 22:40:37.022053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.925 [2024-10-01 22:40:37.022065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.925 qpair failed and we were unable to recover it. 
00:41:41.925 [2024-10-01 22:40:37.022365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.925 [2024-10-01 22:40:37.022376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.925 qpair failed and we were unable to recover it. 00:41:41.925 [2024-10-01 22:40:37.022653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.925 [2024-10-01 22:40:37.022664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.925 qpair failed and we were unable to recover it. 00:41:41.925 [2024-10-01 22:40:37.022984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.925 [2024-10-01 22:40:37.022994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.925 qpair failed and we were unable to recover it. 00:41:41.925 [2024-10-01 22:40:37.023298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.925 [2024-10-01 22:40:37.023310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.925 qpair failed and we were unable to recover it. 00:41:41.925 [2024-10-01 22:40:37.023621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.925 [2024-10-01 22:40:37.023636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.925 qpair failed and we were unable to recover it. 00:41:41.925 [2024-10-01 22:40:37.023942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.925 [2024-10-01 22:40:37.023954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.925 qpair failed and we were unable to recover it. 00:41:41.925 [2024-10-01 22:40:37.024146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.925 [2024-10-01 22:40:37.024157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.925 qpair failed and we were unable to recover it. 00:41:41.925 [2024-10-01 22:40:37.024462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.925 [2024-10-01 22:40:37.024473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.925 qpair failed and we were unable to recover it. 00:41:41.925 [2024-10-01 22:40:37.024783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.925 [2024-10-01 22:40:37.024795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.925 qpair failed and we were unable to recover it. 00:41:41.925 [2024-10-01 22:40:37.025093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.925 [2024-10-01 22:40:37.025105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.925 qpair failed and we were unable to recover it. 
00:41:41.925 [2024-10-01 22:40:37.025406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.926 [2024-10-01 22:40:37.025417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.926 qpair failed and we were unable to recover it.
00:41:41.926 [... the same three-line error repeats, differing only in timestamps, for every reconnect attempt from 22:40:37.025406 through 22:40:37.089805: each connect() on tqpair=0x130a180 to 10.0.0.2, port=4420 fails with errno = 111 and the qpair cannot be recovered ...]
00:41:41.933 [2024-10-01 22:40:37.089981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.934 [2024-10-01 22:40:37.089992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.934 qpair failed and we were unable to recover it. 00:41:41.934 [2024-10-01 22:40:37.090169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.934 [2024-10-01 22:40:37.090181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.934 qpair failed and we were unable to recover it. 00:41:41.934 [2024-10-01 22:40:37.090469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.934 [2024-10-01 22:40:37.090480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.934 qpair failed and we were unable to recover it. 00:41:41.934 [2024-10-01 22:40:37.090777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.934 [2024-10-01 22:40:37.090788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.934 qpair failed and we were unable to recover it. 00:41:41.934 [2024-10-01 22:40:37.090997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.934 [2024-10-01 22:40:37.091008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.934 qpair failed and we were unable to recover it. 00:41:41.934 [2024-10-01 22:40:37.091284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.934 [2024-10-01 22:40:37.091295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.934 qpair failed and we were unable to recover it. 00:41:41.934 [2024-10-01 22:40:37.091595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.934 [2024-10-01 22:40:37.091606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.934 qpair failed and we were unable to recover it. 00:41:41.934 [2024-10-01 22:40:37.091874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.934 [2024-10-01 22:40:37.091885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.934 qpair failed and we were unable to recover it. 00:41:41.934 [2024-10-01 22:40:37.092185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.934 [2024-10-01 22:40:37.092196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.934 qpair failed and we were unable to recover it. 00:41:41.934 [2024-10-01 22:40:37.092519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.934 [2024-10-01 22:40:37.092530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.934 qpair failed and we were unable to recover it. 
00:41:41.934 [2024-10-01 22:40:37.092811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.934 [2024-10-01 22:40:37.092823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.934 qpair failed and we were unable to recover it. 00:41:41.934 [2024-10-01 22:40:37.093132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.934 [2024-10-01 22:40:37.093144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.934 qpair failed and we were unable to recover it. 00:41:41.934 [2024-10-01 22:40:37.093446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.934 [2024-10-01 22:40:37.093457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.934 qpair failed and we were unable to recover it. 00:41:41.934 [2024-10-01 22:40:37.093736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.934 [2024-10-01 22:40:37.093747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.934 qpair failed and we were unable to recover it. 00:41:41.934 [2024-10-01 22:40:37.094053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.934 [2024-10-01 22:40:37.094064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.934 qpair failed and we were unable to recover it. 00:41:41.934 [2024-10-01 22:40:37.094363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.934 [2024-10-01 22:40:37.094377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.934 qpair failed and we were unable to recover it. 00:41:41.934 [2024-10-01 22:40:37.094677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.934 [2024-10-01 22:40:37.094689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.934 qpair failed and we were unable to recover it. 00:41:41.934 [2024-10-01 22:40:37.094984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.934 [2024-10-01 22:40:37.094996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.934 qpair failed and we were unable to recover it. 00:41:41.934 [2024-10-01 22:40:37.095187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.934 [2024-10-01 22:40:37.095198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.934 qpair failed and we were unable to recover it. 00:41:41.934 [2024-10-01 22:40:37.095510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.934 [2024-10-01 22:40:37.095522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.934 qpair failed and we were unable to recover it. 
00:41:41.934 [2024-10-01 22:40:37.095836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.934 [2024-10-01 22:40:37.095848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.934 qpair failed and we were unable to recover it. 00:41:41.934 [2024-10-01 22:40:37.096183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.934 [2024-10-01 22:40:37.096195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.934 qpair failed and we were unable to recover it. 00:41:41.934 [2024-10-01 22:40:37.096475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.934 [2024-10-01 22:40:37.096486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.934 qpair failed and we were unable to recover it. 00:41:41.934 [2024-10-01 22:40:37.096711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.934 [2024-10-01 22:40:37.096722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.934 qpair failed and we were unable to recover it. 00:41:41.934 [2024-10-01 22:40:37.096889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.934 [2024-10-01 22:40:37.096901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.934 qpair failed and we were unable to recover it. 00:41:41.934 [2024-10-01 22:40:37.097240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.934 [2024-10-01 22:40:37.097251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.934 qpair failed and we were unable to recover it. 00:41:41.934 [2024-10-01 22:40:37.097553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.934 [2024-10-01 22:40:37.097566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.934 qpair failed and we were unable to recover it. 00:41:41.934 [2024-10-01 22:40:37.097877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.934 [2024-10-01 22:40:37.097888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.934 qpair failed and we were unable to recover it. 00:41:41.934 [2024-10-01 22:40:37.098069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.935 [2024-10-01 22:40:37.098080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.935 qpair failed and we were unable to recover it. 00:41:41.935 [2024-10-01 22:40:37.098374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.935 [2024-10-01 22:40:37.098385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.935 qpair failed and we were unable to recover it. 
00:41:41.935 [2024-10-01 22:40:37.098693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.935 [2024-10-01 22:40:37.098704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.935 qpair failed and we were unable to recover it. 00:41:41.935 [2024-10-01 22:40:37.098997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.935 [2024-10-01 22:40:37.099007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.935 qpair failed and we were unable to recover it. 00:41:41.935 [2024-10-01 22:40:37.099188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.935 [2024-10-01 22:40:37.099200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.935 qpair failed and we were unable to recover it. 00:41:41.935 [2024-10-01 22:40:37.099484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.935 [2024-10-01 22:40:37.099495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.935 qpair failed and we were unable to recover it. 00:41:41.935 [2024-10-01 22:40:37.099789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.935 [2024-10-01 22:40:37.099800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.935 qpair failed and we were unable to recover it. 00:41:41.935 [2024-10-01 22:40:37.100114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.935 [2024-10-01 22:40:37.100125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.935 qpair failed and we were unable to recover it. 00:41:41.935 [2024-10-01 22:40:37.100438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.935 [2024-10-01 22:40:37.100450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.935 qpair failed and we were unable to recover it. 00:41:41.935 [2024-10-01 22:40:37.100750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.935 [2024-10-01 22:40:37.100762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.935 qpair failed and we were unable to recover it. 00:41:41.935 [2024-10-01 22:40:37.101068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.935 [2024-10-01 22:40:37.101088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.935 qpair failed and we were unable to recover it. 00:41:41.935 [2024-10-01 22:40:37.101438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.935 [2024-10-01 22:40:37.101448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.935 qpair failed and we were unable to recover it. 
00:41:41.935 [2024-10-01 22:40:37.101771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.935 [2024-10-01 22:40:37.101784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.935 qpair failed and we were unable to recover it. 00:41:41.935 [2024-10-01 22:40:37.102067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.935 [2024-10-01 22:40:37.102078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.935 qpair failed and we were unable to recover it. 00:41:41.935 [2024-10-01 22:40:37.102387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.935 [2024-10-01 22:40:37.102399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.935 qpair failed and we were unable to recover it. 00:41:41.935 [2024-10-01 22:40:37.102705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.935 [2024-10-01 22:40:37.102716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.935 qpair failed and we were unable to recover it. 00:41:41.935 [2024-10-01 22:40:37.103059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.935 [2024-10-01 22:40:37.103070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.935 qpair failed and we were unable to recover it. 00:41:41.935 [2024-10-01 22:40:37.103395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.935 [2024-10-01 22:40:37.103406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.935 qpair failed and we were unable to recover it. 00:41:41.935 [2024-10-01 22:40:37.103717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.935 [2024-10-01 22:40:37.103728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.935 qpair failed and we were unable to recover it. 00:41:41.935 [2024-10-01 22:40:37.104056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.935 [2024-10-01 22:40:37.104067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.935 qpair failed and we were unable to recover it. 00:41:41.935 [2024-10-01 22:40:37.104366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.935 [2024-10-01 22:40:37.104376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.935 qpair failed and we were unable to recover it. 00:41:41.935 [2024-10-01 22:40:37.104653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.935 [2024-10-01 22:40:37.104665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.935 qpair failed and we were unable to recover it. 
00:41:41.935 [2024-10-01 22:40:37.104947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.935 [2024-10-01 22:40:37.104959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.935 qpair failed and we were unable to recover it. 00:41:41.935 [2024-10-01 22:40:37.105264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.935 [2024-10-01 22:40:37.105275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.935 qpair failed and we were unable to recover it. 00:41:41.935 [2024-10-01 22:40:37.105520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.935 [2024-10-01 22:40:37.105530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.935 qpair failed and we were unable to recover it. 00:41:41.935 [2024-10-01 22:40:37.105830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.935 [2024-10-01 22:40:37.105841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.935 qpair failed and we were unable to recover it. 00:41:41.935 [2024-10-01 22:40:37.106154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.935 [2024-10-01 22:40:37.106166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.935 qpair failed and we were unable to recover it. 00:41:41.935 [2024-10-01 22:40:37.106469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.935 [2024-10-01 22:40:37.106479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.935 qpair failed and we were unable to recover it. 00:41:41.936 [2024-10-01 22:40:37.106787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.936 [2024-10-01 22:40:37.106801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.936 qpair failed and we were unable to recover it. 00:41:41.936 [2024-10-01 22:40:37.107088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.936 [2024-10-01 22:40:37.107099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.936 qpair failed and we were unable to recover it. 00:41:41.936 [2024-10-01 22:40:37.107403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.936 [2024-10-01 22:40:37.107415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.936 qpair failed and we were unable to recover it. 00:41:41.936 [2024-10-01 22:40:37.107601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.936 [2024-10-01 22:40:37.107611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.936 qpair failed and we were unable to recover it. 
00:41:41.936 [2024-10-01 22:40:37.107820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.936 [2024-10-01 22:40:37.107831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.936 qpair failed and we were unable to recover it. 00:41:41.936 [2024-10-01 22:40:37.108183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.936 [2024-10-01 22:40:37.108193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.936 qpair failed and we were unable to recover it. 00:41:41.936 [2024-10-01 22:40:37.108495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.936 [2024-10-01 22:40:37.108507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.936 qpair failed and we were unable to recover it. 00:41:41.936 [2024-10-01 22:40:37.108792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.936 [2024-10-01 22:40:37.108804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.936 qpair failed and we were unable to recover it. 00:41:41.936 [2024-10-01 22:40:37.109108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.936 [2024-10-01 22:40:37.109120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.936 qpair failed and we were unable to recover it. 00:41:41.936 [2024-10-01 22:40:37.109399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.936 [2024-10-01 22:40:37.109410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.936 qpair failed and we were unable to recover it. 00:41:41.936 [2024-10-01 22:40:37.109761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.936 [2024-10-01 22:40:37.109774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.936 qpair failed and we were unable to recover it. 00:41:41.936 [2024-10-01 22:40:37.110099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.936 [2024-10-01 22:40:37.110110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.936 qpair failed and we were unable to recover it. 00:41:41.936 [2024-10-01 22:40:37.110412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.936 [2024-10-01 22:40:37.110424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.936 qpair failed and we were unable to recover it. 00:41:41.936 [2024-10-01 22:40:37.110705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.936 [2024-10-01 22:40:37.110716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.936 qpair failed and we were unable to recover it. 
00:41:41.936 [2024-10-01 22:40:37.111061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.936 [2024-10-01 22:40:37.111072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.936 qpair failed and we were unable to recover it. 00:41:41.936 [2024-10-01 22:40:37.111375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.936 [2024-10-01 22:40:37.111385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.936 qpair failed and we were unable to recover it. 00:41:41.936 [2024-10-01 22:40:37.111569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.936 [2024-10-01 22:40:37.111580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.936 qpair failed and we were unable to recover it. 00:41:41.936 [2024-10-01 22:40:37.111861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.936 [2024-10-01 22:40:37.111873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.936 qpair failed and we were unable to recover it. 00:41:41.936 [2024-10-01 22:40:37.112182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.936 [2024-10-01 22:40:37.112193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.936 qpair failed and we were unable to recover it. 00:41:41.936 [2024-10-01 22:40:37.112488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.936 [2024-10-01 22:40:37.112500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.936 qpair failed and we were unable to recover it. 00:41:41.936 [2024-10-01 22:40:37.112797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.936 [2024-10-01 22:40:37.112809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.936 qpair failed and we were unable to recover it. 00:41:41.936 [2024-10-01 22:40:37.113134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.936 [2024-10-01 22:40:37.113146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.936 qpair failed and we were unable to recover it. 00:41:41.936 [2024-10-01 22:40:37.113461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.936 [2024-10-01 22:40:37.113471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.936 qpair failed and we were unable to recover it. 00:41:41.936 [2024-10-01 22:40:37.113773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.936 [2024-10-01 22:40:37.113784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.936 qpair failed and we were unable to recover it. 
00:41:41.936 [2024-10-01 22:40:37.114087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.936 [2024-10-01 22:40:37.114098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.936 qpair failed and we were unable to recover it. 00:41:41.936 [2024-10-01 22:40:37.114377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.936 [2024-10-01 22:40:37.114388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.936 qpair failed and we were unable to recover it. 00:41:41.936 [2024-10-01 22:40:37.114689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.936 [2024-10-01 22:40:37.114700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.936 qpair failed and we were unable to recover it. 00:41:41.936 [2024-10-01 22:40:37.115073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.936 [2024-10-01 22:40:37.115086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.937 qpair failed and we were unable to recover it. 00:41:41.937 [2024-10-01 22:40:37.115393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.937 [2024-10-01 22:40:37.115405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.937 qpair failed and we were unable to recover it. 00:41:41.937 [2024-10-01 22:40:37.115683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.937 [2024-10-01 22:40:37.115694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.937 qpair failed and we were unable to recover it. 00:41:41.937 [2024-10-01 22:40:37.116015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.937 [2024-10-01 22:40:37.116026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.937 qpair failed and we were unable to recover it. 00:41:41.937 [2024-10-01 22:40:37.116341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.937 [2024-10-01 22:40:37.116352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.937 qpair failed and we were unable to recover it. 00:41:41.937 [2024-10-01 22:40:37.116657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.937 [2024-10-01 22:40:37.116668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.937 qpair failed and we were unable to recover it. 00:41:41.937 [2024-10-01 22:40:37.116998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.937 [2024-10-01 22:40:37.117009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.937 qpair failed and we were unable to recover it. 
00:41:41.937 [2024-10-01 22:40:37.117310 through 22:40:37.119147] (connect()/qpair failure sequence repeats; duplicates condensed)
00:41:41.937 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 399162 Killed "${NVMF_APP[@]}" "$@"
00:41:41.937 [2024-10-01 22:40:37.119457 through 22:40:37.120136] (connect()/qpair failure sequence repeats; duplicates condensed)
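Context for the condensed failures above: errno = 111 is ECONNREFUSED on Linux. The "Killed" line shows the test deliberately killing the NVMe-oF target process, so every host-side reconnect to 10.0.0.2 port 4420 is refused until the target is restarted below. A minimal, self-contained C sketch that reproduces the same errno; this is an illustration, not SPDK's posix.c, and the loopback endpoint is a stand-in for the test's address:

    /* Sketch only: connect() to a TCP port with no listener fails with
     * ECONNREFUSED, whose numeric value on Linux is 111 -- the same
     * "connect() failed, errno = 111" seen in the log above. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr); /* placeholder address */

        /* Assuming nothing listens on this port, the call fails immediately. */
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

        close(fd);
        return 0;
    }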
00:41:41.937 22:40:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:41:41.937 22:40:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:41:41.937 22:40:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:41:41.937 22:40:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:41:41.937 22:40:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:41:41.937 [2024-10-01 22:40:37.120431 through 22:40:37.122581] (interleaved connect()/qpair failure sequence repeats; duplicates condensed)
00:41:41.937-00:41:41.938 [2024-10-01 22:40:37.122895 through 22:40:37.128634] (connect()/qpair failure sequence repeats; duplicates condensed)
00:41:41.938 22:40:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=400200
00:41:41.938 22:40:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 400200
00:41:41.938 22:40:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:41:41.938 22:40:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 400200 ']'
00:41:41.938 22:40:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:41:41.938 22:40:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:41:41.938 22:40:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:41:41.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:41:41.938 [2024-10-01 22:40:37.128924 through 22:40:37.130263] (interleaved connect()/qpair failure sequence repeats; duplicates condensed)
00:41:41.939 22:40:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:41:41.939 22:40:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:41:41.939 [2024-10-01 22:40:37.130560 through 22:40:37.133022] (interleaved connect()/qpair failure sequence repeats; duplicates condensed)
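The xtrace above shows the tc2 test restarting the target: nvmfappstart relaunches nvmf_tgt (new pid 400200) inside the cvl_0_0_ns_spdk network namespace, and waitforlisten polls until the process accepts connections on /var/tmp/spdk.sock, giving up after max_retries=100. A hedged C sketch of that connect-probe polling idea; the real helper is a bash function in autotest_common.sh, and the 100 ms probe interval here is an assumption:

    /* Sketch of a "waitforlisten"-style readiness check: probe a UNIX
     * domain RPC socket until the target process starts accepting.
     * Illustration only, not the autotest's actual implementation. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    static int rpc_socket_ready(const char *path)
    {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return 0;

        struct sockaddr_un addr = {0};
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        int ok = connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0;
        close(fd);
        return ok;
    }

    int main(void)
    {
        /* /var/tmp/spdk.sock is the RPC socket named in the log above. */
        for (int i = 0; i < 100; i++) {       /* cf. max_retries=100 in the trace */
            if (rpc_socket_ready("/var/tmp/spdk.sock")) {
                puts("listening");
                return 0;
            }
            usleep(100 * 1000);               /* assumed 100 ms between probes */
        }
        puts("timed out waiting for listener");
        return 1;
    }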
00:41:41.939 [2024-10-01 22:40:37.133340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.939 [2024-10-01 22:40:37.133351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.939 qpair failed and we were unable to recover it. 00:41:41.939 [2024-10-01 22:40:37.133658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.939 [2024-10-01 22:40:37.133671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.939 qpair failed and we were unable to recover it. 00:41:41.939 [2024-10-01 22:40:37.134005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.939 [2024-10-01 22:40:37.134017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.939 qpair failed and we were unable to recover it. 00:41:41.939 [2024-10-01 22:40:37.134344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.939 [2024-10-01 22:40:37.134356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.939 qpair failed and we were unable to recover it. 00:41:41.939 [2024-10-01 22:40:37.134555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.939 [2024-10-01 22:40:37.134567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.939 qpair failed and we were unable to recover it. 00:41:41.939 [2024-10-01 22:40:37.134902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.939 [2024-10-01 22:40:37.134915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.939 qpair failed and we were unable to recover it. 00:41:41.939 [2024-10-01 22:40:37.135264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.939 [2024-10-01 22:40:37.135277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.939 qpair failed and we were unable to recover it. 00:41:41.939 [2024-10-01 22:40:37.135494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.939 [2024-10-01 22:40:37.135506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.939 qpair failed and we were unable to recover it. 00:41:41.939 [2024-10-01 22:40:37.135847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.939 [2024-10-01 22:40:37.135859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.939 qpair failed and we were unable to recover it. 00:41:41.939 [2024-10-01 22:40:37.136251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.939 [2024-10-01 22:40:37.136263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.939 qpair failed and we were unable to recover it. 
00:41:41.939 [2024-10-01 22:40:37.136606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.939 [2024-10-01 22:40:37.136617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.939 qpair failed and we were unable to recover it. 00:41:41.939 [2024-10-01 22:40:37.136947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.939 [2024-10-01 22:40:37.136960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.939 qpair failed and we were unable to recover it. 00:41:41.939 [2024-10-01 22:40:37.137271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.939 [2024-10-01 22:40:37.137283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.939 qpair failed and we were unable to recover it. 00:41:41.939 [2024-10-01 22:40:37.137593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.939 [2024-10-01 22:40:37.137605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.939 qpair failed and we were unable to recover it. 00:41:41.939 [2024-10-01 22:40:37.137825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.939 [2024-10-01 22:40:37.137840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.939 qpair failed and we were unable to recover it. 00:41:41.939 [2024-10-01 22:40:37.138150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.939 [2024-10-01 22:40:37.138163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.939 qpair failed and we were unable to recover it. 00:41:41.939 [2024-10-01 22:40:37.138443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.940 [2024-10-01 22:40:37.138455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.940 qpair failed and we were unable to recover it. 00:41:41.940 [2024-10-01 22:40:37.138668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.940 [2024-10-01 22:40:37.138681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.940 qpair failed and we were unable to recover it. 00:41:41.940 [2024-10-01 22:40:37.138978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.940 [2024-10-01 22:40:37.138989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.940 qpair failed and we were unable to recover it. 00:41:41.940 [2024-10-01 22:40:37.139304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.940 [2024-10-01 22:40:37.139317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.940 qpair failed and we were unable to recover it. 
00:41:41.940 [2024-10-01 22:40:37.139635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.940 [2024-10-01 22:40:37.139648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.940 qpair failed and we were unable to recover it. 00:41:41.940 [2024-10-01 22:40:37.139989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.940 [2024-10-01 22:40:37.140002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.940 qpair failed and we were unable to recover it. 00:41:41.940 [2024-10-01 22:40:37.140236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.940 [2024-10-01 22:40:37.140247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.940 qpair failed and we were unable to recover it. 00:41:41.940 [2024-10-01 22:40:37.140565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.940 [2024-10-01 22:40:37.140578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:41.940 qpair failed and we were unable to recover it. 00:41:42.214 [2024-10-01 22:40:37.140969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.214 [2024-10-01 22:40:37.140982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.214 qpair failed and we were unable to recover it. 00:41:42.214 [2024-10-01 22:40:37.141305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.214 [2024-10-01 22:40:37.141317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.214 qpair failed and we were unable to recover it. 00:41:42.214 [2024-10-01 22:40:37.141607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.214 [2024-10-01 22:40:37.141619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.214 qpair failed and we were unable to recover it. 00:41:42.214 [2024-10-01 22:40:37.141934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.214 [2024-10-01 22:40:37.141946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.214 qpair failed and we were unable to recover it. 00:41:42.214 [2024-10-01 22:40:37.142175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.214 [2024-10-01 22:40:37.142187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.214 qpair failed and we were unable to recover it. 00:41:42.214 [2024-10-01 22:40:37.142542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.214 [2024-10-01 22:40:37.142554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.214 qpair failed and we were unable to recover it. 
00:41:42.214 [2024-10-01 22:40:37.142873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.214 [2024-10-01 22:40:37.142887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.214 qpair failed and we were unable to recover it. 00:41:42.214 [2024-10-01 22:40:37.143211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.214 [2024-10-01 22:40:37.143223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.214 qpair failed and we were unable to recover it. 00:41:42.214 [2024-10-01 22:40:37.143524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.214 [2024-10-01 22:40:37.143536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.214 qpair failed and we were unable to recover it. 00:41:42.214 [2024-10-01 22:40:37.143848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.214 [2024-10-01 22:40:37.143861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.214 qpair failed and we were unable to recover it. 00:41:42.214 [2024-10-01 22:40:37.144023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.214 [2024-10-01 22:40:37.144037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.214 qpair failed and we were unable to recover it. 00:41:42.214 [2024-10-01 22:40:37.144361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.214 [2024-10-01 22:40:37.144373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.214 qpair failed and we were unable to recover it. 00:41:42.214 [2024-10-01 22:40:37.144681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.214 [2024-10-01 22:40:37.144693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.214 qpair failed and we were unable to recover it. 00:41:42.214 [2024-10-01 22:40:37.145009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.214 [2024-10-01 22:40:37.145020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.214 qpair failed and we were unable to recover it. 00:41:42.214 [2024-10-01 22:40:37.145325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.214 [2024-10-01 22:40:37.145336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.214 qpair failed and we were unable to recover it. 00:41:42.214 [2024-10-01 22:40:37.145556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.214 [2024-10-01 22:40:37.145567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.214 qpair failed and we were unable to recover it. 
00:41:42.214 [2024-10-01 22:40:37.145878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.214 [2024-10-01 22:40:37.145890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.214 qpair failed and we were unable to recover it. 00:41:42.214 [2024-10-01 22:40:37.146204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.214 [2024-10-01 22:40:37.146218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.214 qpair failed and we were unable to recover it. 00:41:42.214 [2024-10-01 22:40:37.146398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.214 [2024-10-01 22:40:37.146409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.214 qpair failed and we were unable to recover it. 00:41:42.214 [2024-10-01 22:40:37.146591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.214 [2024-10-01 22:40:37.146602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.214 qpair failed and we were unable to recover it. 00:41:42.214 [2024-10-01 22:40:37.146911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.214 [2024-10-01 22:40:37.146922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.214 qpair failed and we were unable to recover it. 00:41:42.214 [2024-10-01 22:40:37.147322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.214 [2024-10-01 22:40:37.147333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.214 qpair failed and we were unable to recover it. 00:41:42.214 [2024-10-01 22:40:37.147645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.214 [2024-10-01 22:40:37.147657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.214 qpair failed and we were unable to recover it. 00:41:42.214 [2024-10-01 22:40:37.148038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.214 [2024-10-01 22:40:37.148049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.214 qpair failed and we were unable to recover it. 00:41:42.214 [2024-10-01 22:40:37.148357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.214 [2024-10-01 22:40:37.148367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.214 qpair failed and we were unable to recover it. 00:41:42.214 [2024-10-01 22:40:37.148524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.214 [2024-10-01 22:40:37.148535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.215 qpair failed and we were unable to recover it. 
00:41:42.215 [2024-10-01 22:40:37.148825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.215 [2024-10-01 22:40:37.148836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.215 qpair failed and we were unable to recover it. 00:41:42.215 [2024-10-01 22:40:37.149047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.215 [2024-10-01 22:40:37.149057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.215 qpair failed and we were unable to recover it. 00:41:42.215 [2024-10-01 22:40:37.149236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.215 [2024-10-01 22:40:37.149247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.215 qpair failed and we were unable to recover it. 00:41:42.215 [2024-10-01 22:40:37.149563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.215 [2024-10-01 22:40:37.149575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.215 qpair failed and we were unable to recover it. 00:41:42.215 [2024-10-01 22:40:37.149659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.215 [2024-10-01 22:40:37.149669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.215 qpair failed and we were unable to recover it. 00:41:42.215 [2024-10-01 22:40:37.149859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.215 [2024-10-01 22:40:37.149872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.215 qpair failed and we were unable to recover it. 00:41:42.215 [2024-10-01 22:40:37.150293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.215 [2024-10-01 22:40:37.150305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.215 qpair failed and we were unable to recover it. 00:41:42.215 [2024-10-01 22:40:37.150650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.215 [2024-10-01 22:40:37.150663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.215 qpair failed and we were unable to recover it. 00:41:42.215 [2024-10-01 22:40:37.150980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.215 [2024-10-01 22:40:37.150992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.215 qpair failed and we were unable to recover it. 00:41:42.215 [2024-10-01 22:40:37.151295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.215 [2024-10-01 22:40:37.151306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.215 qpair failed and we were unable to recover it. 
00:41:42.215 [2024-10-01 22:40:37.151642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.215 [2024-10-01 22:40:37.151654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.215 qpair failed and we were unable to recover it. 00:41:42.215 [2024-10-01 22:40:37.151959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.215 [2024-10-01 22:40:37.151971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.215 qpair failed and we were unable to recover it. 00:41:42.215 [2024-10-01 22:40:37.152286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.215 [2024-10-01 22:40:37.152298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.215 qpair failed and we were unable to recover it. 00:41:42.215 [2024-10-01 22:40:37.152618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.215 [2024-10-01 22:40:37.152640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.215 qpair failed and we were unable to recover it. 00:41:42.215 [2024-10-01 22:40:37.152970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.215 [2024-10-01 22:40:37.152981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.215 qpair failed and we were unable to recover it. 00:41:42.215 [2024-10-01 22:40:37.153203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.215 [2024-10-01 22:40:37.153214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.215 qpair failed and we were unable to recover it. 00:41:42.215 [2024-10-01 22:40:37.153533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.215 [2024-10-01 22:40:37.153544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.215 qpair failed and we were unable to recover it. 00:41:42.215 [2024-10-01 22:40:37.153864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.215 [2024-10-01 22:40:37.153876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.215 qpair failed and we were unable to recover it. 00:41:42.215 [2024-10-01 22:40:37.154204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.215 [2024-10-01 22:40:37.154217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.215 qpair failed and we were unable to recover it. 00:41:42.215 [2024-10-01 22:40:37.154541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.215 [2024-10-01 22:40:37.154551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.215 qpair failed and we were unable to recover it. 
00:41:42.215 [2024-10-01 22:40:37.154918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.215 [2024-10-01 22:40:37.154930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.215 qpair failed and we were unable to recover it. 00:41:42.215 [2024-10-01 22:40:37.155239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.215 [2024-10-01 22:40:37.155249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.215 qpair failed and we were unable to recover it. 00:41:42.215 [2024-10-01 22:40:37.155604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.215 [2024-10-01 22:40:37.155616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.215 qpair failed and we were unable to recover it. 00:41:42.215 [2024-10-01 22:40:37.155890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.215 [2024-10-01 22:40:37.155901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.215 qpair failed and we were unable to recover it. 00:41:42.215 [2024-10-01 22:40:37.156218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.215 [2024-10-01 22:40:37.156229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.215 qpair failed and we were unable to recover it. 00:41:42.215 [2024-10-01 22:40:37.156526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.215 [2024-10-01 22:40:37.156537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.215 qpair failed and we were unable to recover it. 00:41:42.215 [2024-10-01 22:40:37.156810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.215 [2024-10-01 22:40:37.156822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.215 qpair failed and we were unable to recover it. 00:41:42.215 [2024-10-01 22:40:37.157133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.215 [2024-10-01 22:40:37.157145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.215 qpair failed and we were unable to recover it. 00:41:42.215 [2024-10-01 22:40:37.157469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.215 [2024-10-01 22:40:37.157481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.215 qpair failed and we were unable to recover it. 00:41:42.215 [2024-10-01 22:40:37.157649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.215 [2024-10-01 22:40:37.157661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.215 qpair failed and we were unable to recover it. 
00:41:42.215 [2024-10-01 22:40:37.157753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.215 [2024-10-01 22:40:37.157763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.215 qpair failed and we were unable to recover it. 00:41:42.215 [2024-10-01 22:40:37.158129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.215 [2024-10-01 22:40:37.158140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.215 qpair failed and we were unable to recover it. 00:41:42.215 [2024-10-01 22:40:37.158457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.215 [2024-10-01 22:40:37.158469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.215 qpair failed and we were unable to recover it. 00:41:42.215 [2024-10-01 22:40:37.158790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.215 [2024-10-01 22:40:37.158802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.215 qpair failed and we were unable to recover it. 00:41:42.215 [2024-10-01 22:40:37.159000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.215 [2024-10-01 22:40:37.159010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.215 qpair failed and we were unable to recover it. 00:41:42.215 [2024-10-01 22:40:37.159265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.215 [2024-10-01 22:40:37.159277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.215 qpair failed and we were unable to recover it. 00:41:42.215 [2024-10-01 22:40:37.159611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.215 [2024-10-01 22:40:37.159622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.215 qpair failed and we were unable to recover it. 00:41:42.215 [2024-10-01 22:40:37.160023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.215 [2024-10-01 22:40:37.160035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.215 qpair failed and we were unable to recover it. 00:41:42.216 [2024-10-01 22:40:37.160337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.216 [2024-10-01 22:40:37.160348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.216 qpair failed and we were unable to recover it. 00:41:42.216 [2024-10-01 22:40:37.160667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.216 [2024-10-01 22:40:37.160678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.216 qpair failed and we were unable to recover it. 
00:41:42.216 [2024-10-01 22:40:37.161016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.216 [2024-10-01 22:40:37.161027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.216 qpair failed and we were unable to recover it. 00:41:42.216 [2024-10-01 22:40:37.161234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.216 [2024-10-01 22:40:37.161245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.216 qpair failed and we were unable to recover it. 00:41:42.216 [2024-10-01 22:40:37.161571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.216 [2024-10-01 22:40:37.161581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.216 qpair failed and we were unable to recover it. 00:41:42.216 [2024-10-01 22:40:37.161899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.216 [2024-10-01 22:40:37.161910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.216 qpair failed and we were unable to recover it. 00:41:42.216 [2024-10-01 22:40:37.162225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.216 [2024-10-01 22:40:37.162236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.216 qpair failed and we were unable to recover it. 00:41:42.216 [2024-10-01 22:40:37.162544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.216 [2024-10-01 22:40:37.162555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.216 qpair failed and we were unable to recover it. 00:41:42.216 [2024-10-01 22:40:37.162758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.216 [2024-10-01 22:40:37.162769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.216 qpair failed and we were unable to recover it. 00:41:42.216 [2024-10-01 22:40:37.163085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.216 [2024-10-01 22:40:37.163096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.216 qpair failed and we were unable to recover it. 00:41:42.216 [2024-10-01 22:40:37.163405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.216 [2024-10-01 22:40:37.163415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.216 qpair failed and we were unable to recover it. 00:41:42.216 [2024-10-01 22:40:37.163608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.216 [2024-10-01 22:40:37.163619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.216 qpair failed and we were unable to recover it. 
00:41:42.216 [2024-10-01 22:40:37.163904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.216 [2024-10-01 22:40:37.163915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.216 qpair failed and we were unable to recover it. 00:41:42.216 [2024-10-01 22:40:37.164228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.216 [2024-10-01 22:40:37.164238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.216 qpair failed and we were unable to recover it. 00:41:42.216 [2024-10-01 22:40:37.164555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.216 [2024-10-01 22:40:37.164566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.216 qpair failed and we were unable to recover it. 00:41:42.216 [2024-10-01 22:40:37.164861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.216 [2024-10-01 22:40:37.164872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.216 qpair failed and we were unable to recover it. 00:41:42.216 [2024-10-01 22:40:37.165209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.216 [2024-10-01 22:40:37.165221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.216 qpair failed and we were unable to recover it. 00:41:42.216 [2024-10-01 22:40:37.165533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.216 [2024-10-01 22:40:37.165544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.216 qpair failed and we were unable to recover it. 00:41:42.216 [2024-10-01 22:40:37.165899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.216 [2024-10-01 22:40:37.165911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.216 qpair failed and we were unable to recover it. 00:41:42.216 [2024-10-01 22:40:37.166233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.216 [2024-10-01 22:40:37.166243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.216 qpair failed and we were unable to recover it. 00:41:42.216 [2024-10-01 22:40:37.166579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.216 [2024-10-01 22:40:37.166592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.216 qpair failed and we were unable to recover it. 00:41:42.216 [2024-10-01 22:40:37.166817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.216 [2024-10-01 22:40:37.166829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.216 qpair failed and we were unable to recover it. 
00:41:42.216 [2024-10-01 22:40:37.166999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.216 [2024-10-01 22:40:37.167010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.216 qpair failed and we were unable to recover it. 00:41:42.216 [2024-10-01 22:40:37.167188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.216 [2024-10-01 22:40:37.167198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.216 qpair failed and we were unable to recover it. 00:41:42.216 [2024-10-01 22:40:37.167510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.216 [2024-10-01 22:40:37.167522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.216 qpair failed and we were unable to recover it. 00:41:42.216 [2024-10-01 22:40:37.167686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.216 [2024-10-01 22:40:37.167698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.216 qpair failed and we were unable to recover it. 00:41:42.216 [2024-10-01 22:40:37.168053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.216 [2024-10-01 22:40:37.168065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.216 qpair failed and we were unable to recover it. 00:41:42.216 [2024-10-01 22:40:37.168356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.216 [2024-10-01 22:40:37.168368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.216 qpair failed and we were unable to recover it. 00:41:42.216 [2024-10-01 22:40:37.168684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.216 [2024-10-01 22:40:37.168696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.216 qpair failed and we were unable to recover it. 00:41:42.216 [2024-10-01 22:40:37.169035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.216 [2024-10-01 22:40:37.169047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.216 qpair failed and we were unable to recover it. 00:41:42.216 [2024-10-01 22:40:37.169207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.216 [2024-10-01 22:40:37.169218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.216 qpair failed and we were unable to recover it. 00:41:42.216 [2024-10-01 22:40:37.169514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.216 [2024-10-01 22:40:37.169526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.216 qpair failed and we were unable to recover it. 
00:41:42.216 [2024-10-01 22:40:37.169859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.216 [2024-10-01 22:40:37.169871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.216 qpair failed and we were unable to recover it. 00:41:42.216 [2024-10-01 22:40:37.170181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.216 [2024-10-01 22:40:37.170191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.216 qpair failed and we were unable to recover it. 00:41:42.216 [2024-10-01 22:40:37.170521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.216 [2024-10-01 22:40:37.170532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.216 qpair failed and we were unable to recover it. 00:41:42.216 [2024-10-01 22:40:37.170817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.216 [2024-10-01 22:40:37.170829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.216 qpair failed and we were unable to recover it. 00:41:42.216 [2024-10-01 22:40:37.171138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.216 [2024-10-01 22:40:37.171150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.216 qpair failed and we were unable to recover it. 00:41:42.216 [2024-10-01 22:40:37.171432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.216 [2024-10-01 22:40:37.171445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.216 qpair failed and we were unable to recover it. 00:41:42.216 [2024-10-01 22:40:37.171764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.217 [2024-10-01 22:40:37.171777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.217 qpair failed and we were unable to recover it. 00:41:42.217 [2024-10-01 22:40:37.172122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.217 [2024-10-01 22:40:37.172134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.217 qpair failed and we were unable to recover it. 00:41:42.217 [2024-10-01 22:40:37.172354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.217 [2024-10-01 22:40:37.172365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.217 qpair failed and we were unable to recover it. 00:41:42.217 [2024-10-01 22:40:37.172548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.217 [2024-10-01 22:40:37.172559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.217 qpair failed and we were unable to recover it. 
00:41:42.217 [2024-10-01 22:40:37.172872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.217 [2024-10-01 22:40:37.172884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.217 qpair failed and we were unable to recover it. 00:41:42.217 [2024-10-01 22:40:37.173217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.217 [2024-10-01 22:40:37.173228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.217 qpair failed and we were unable to recover it. 00:41:42.217 [2024-10-01 22:40:37.173522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.217 [2024-10-01 22:40:37.173534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.217 qpair failed and we were unable to recover it. 00:41:42.217 [2024-10-01 22:40:37.173825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.217 [2024-10-01 22:40:37.173836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.217 qpair failed and we were unable to recover it. 00:41:42.217 [2024-10-01 22:40:37.174168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.217 [2024-10-01 22:40:37.174179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.217 qpair failed and we were unable to recover it. 00:41:42.217 [2024-10-01 22:40:37.174349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.217 [2024-10-01 22:40:37.174361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.217 qpair failed and we were unable to recover it. 00:41:42.217 [2024-10-01 22:40:37.174692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.217 [2024-10-01 22:40:37.174706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.217 qpair failed and we were unable to recover it. 00:41:42.217 [2024-10-01 22:40:37.175047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.217 [2024-10-01 22:40:37.175058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.217 qpair failed and we were unable to recover it. 00:41:42.217 [2024-10-01 22:40:37.175365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.217 [2024-10-01 22:40:37.175377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.217 qpair failed and we were unable to recover it. 00:41:42.217 [2024-10-01 22:40:37.175538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.217 [2024-10-01 22:40:37.175549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.217 qpair failed and we were unable to recover it. 
00:41:42.217 [2024-10-01 22:40:37.175876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.217 [2024-10-01 22:40:37.175888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.217 qpair failed and we were unable to recover it. 00:41:42.217 [2024-10-01 22:40:37.176205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.217 [2024-10-01 22:40:37.176217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.217 qpair failed and we were unable to recover it. 00:41:42.217 [2024-10-01 22:40:37.176531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.217 [2024-10-01 22:40:37.176542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.217 qpair failed and we were unable to recover it. 00:41:42.217 [2024-10-01 22:40:37.176722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.217 [2024-10-01 22:40:37.176733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.217 qpair failed and we were unable to recover it. 00:41:42.217 [2024-10-01 22:40:37.177110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.217 [2024-10-01 22:40:37.177122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.217 qpair failed and we were unable to recover it. 00:41:42.217 [2024-10-01 22:40:37.177428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.217 [2024-10-01 22:40:37.177440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.217 qpair failed and we were unable to recover it. 00:41:42.217 [2024-10-01 22:40:37.177758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.217 [2024-10-01 22:40:37.177769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.217 qpair failed and we were unable to recover it. 00:41:42.217 [2024-10-01 22:40:37.178107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.217 [2024-10-01 22:40:37.178118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.217 qpair failed and we were unable to recover it. 00:41:42.217 [2024-10-01 22:40:37.178283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.217 [2024-10-01 22:40:37.178295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.217 qpair failed and we were unable to recover it. 00:41:42.217 [2024-10-01 22:40:37.178480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.217 [2024-10-01 22:40:37.178490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.217 qpair failed and we were unable to recover it. 
00:41:42.217 [2024-10-01 22:40:37.178804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:42.217 [2024-10-01 22:40:37.178816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420
00:41:42.217 qpair failed and we were unable to recover it.
00:41:42.218 [... the three-line failure pattern above repeats back-to-back from 22:40:37.178804 through 22:40:37.187505, differing only in timestamps: every connect() attempt to 10.0.0.2:4420 returns errno = 111 and tqpair 0x130a180 cannot be recovered ...]
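For reading the block above: errno = 111 is Linux's ECONNREFUSED, which connect() returns when the peer answers with a TCP RST, typically because nothing was listening on 10.0.0.2:4420 at that moment. A minimal, generic check (illustrative C, not part of this build):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* On Linux, ECONNREFUSED is defined as 111 in <errno.h>. */
        printf("%d -> %s\n", ECONNREFUSED, strerror(ECONNREFUSED));
        return 0; /* prints: 111 -> Connection refused */
    }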
00:41:42.218 [... two more connect() failures (22:40:37.187879 and 22:40:37.188217) with the same errno = 111 / unrecovered-qpair pattern ...]
00:41:42.218 [2024-10-01 22:40:37.188264] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization...
00:41:42.218 [2024-10-01 22:40:37.188310] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:41:42.218 [... connect() failures to 10.0.0.2:4420 resume immediately (22:40:37.188593 through 22:40:37.190411), same errno = 111 / unrecovered-qpair pattern ...]
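The "DPDK EAL parameters" line above is the argv that SPDK hands to DPDK's environment abstraction layer: -c 0xF0 is a hex coremask pinning the app to lcores 4-7, --file-prefix=spdk0 namespaces its hugepage/shared-memory files, and --proc-type=auto lets EAL decide between primary and secondary process. A minimal sketch of feeding such parameters to rte_eal_init() (assumes DPDK headers are installed; this is illustrative, not the nvmf target's actual startup code):

    #include <rte_eal.h>
    #include <rte_lcore.h>
    #include <stdio.h>

    int main(void)
    {
        char *eal_argv[] = {
            "nvmf",                  /* program name, as in the log */
            "-c", "0xF0",            /* hex coremask: lcores 4-7 */
            "--file-prefix=spdk0",   /* hugepage/shm file namespace */
            "--proc-type=auto",      /* primary/secondary autodetect */
            NULL,
        };
        int eal_argc = (int)(sizeof(eal_argv) / sizeof(eal_argv[0])) - 1;

        if (rte_eal_init(eal_argc, eal_argv) < 0) {
            fprintf(stderr, "rte_eal_init failed\n");
            return 1;
        }
        printf("EAL up on lcore %u\n", rte_lcore_id());
        return rte_eal_cleanup();
    }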
00:41:42.218 [2024-10-01 22:40:37.190730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:42.218 [2024-10-01 22:40:37.190744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420
00:41:42.218 qpair failed and we were unable to recover it.
00:41:42.223 [... the same three-line failure pattern repeats without interruption from 22:40:37.190730 through 22:40:37.241901, differing only in timestamps: every connect() to 10.0.0.2:4420 returns errno = 111 and tqpair 0x130a180 is never recovered ...]
00:41:42.223 [2024-10-01 22:40:37.242225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.223 [2024-10-01 22:40:37.242235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.223 qpair failed and we were unable to recover it. 00:41:42.223 [2024-10-01 22:40:37.242521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.223 [2024-10-01 22:40:37.242532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.223 qpair failed and we were unable to recover it. 00:41:42.223 [2024-10-01 22:40:37.242819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.223 [2024-10-01 22:40:37.242830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.223 qpair failed and we were unable to recover it. 00:41:42.223 [2024-10-01 22:40:37.243126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.223 [2024-10-01 22:40:37.243136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.223 qpair failed and we were unable to recover it. 00:41:42.223 [2024-10-01 22:40:37.243469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.223 [2024-10-01 22:40:37.243480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.223 qpair failed and we were unable to recover it. 00:41:42.223 [2024-10-01 22:40:37.243796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.223 [2024-10-01 22:40:37.243806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.223 qpair failed and we were unable to recover it. 00:41:42.223 [2024-10-01 22:40:37.244119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.223 [2024-10-01 22:40:37.244129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.223 qpair failed and we were unable to recover it. 00:41:42.223 [2024-10-01 22:40:37.244418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.223 [2024-10-01 22:40:37.244429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.223 qpair failed and we were unable to recover it. 00:41:42.223 [2024-10-01 22:40:37.244748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.223 [2024-10-01 22:40:37.244758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.223 qpair failed and we were unable to recover it. 00:41:42.223 [2024-10-01 22:40:37.245049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.223 [2024-10-01 22:40:37.245066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.223 qpair failed and we were unable to recover it. 
00:41:42.223 [2024-10-01 22:40:37.245256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.223 [2024-10-01 22:40:37.245266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.223 qpair failed and we were unable to recover it. 00:41:42.223 [2024-10-01 22:40:37.245593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.223 [2024-10-01 22:40:37.245603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.223 qpair failed and we were unable to recover it. 00:41:42.223 [2024-10-01 22:40:37.245915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.223 [2024-10-01 22:40:37.245925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.223 qpair failed and we were unable to recover it. 00:41:42.223 [2024-10-01 22:40:37.246102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.223 [2024-10-01 22:40:37.246112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.223 qpair failed and we were unable to recover it. 00:41:42.223 [2024-10-01 22:40:37.246513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.223 [2024-10-01 22:40:37.246523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.223 qpair failed and we were unable to recover it. 00:41:42.223 [2024-10-01 22:40:37.246843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.223 [2024-10-01 22:40:37.246853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.223 qpair failed and we were unable to recover it. 00:41:42.223 [2024-10-01 22:40:37.247217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.223 [2024-10-01 22:40:37.247227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.223 qpair failed and we were unable to recover it. 00:41:42.223 [2024-10-01 22:40:37.247509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.223 [2024-10-01 22:40:37.247519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.223 qpair failed and we were unable to recover it. 00:41:42.223 [2024-10-01 22:40:37.247851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.223 [2024-10-01 22:40:37.247861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.223 qpair failed and we were unable to recover it. 00:41:42.223 [2024-10-01 22:40:37.248148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.223 [2024-10-01 22:40:37.248158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.223 qpair failed and we were unable to recover it. 
00:41:42.223 [2024-10-01 22:40:37.248472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.223 [2024-10-01 22:40:37.248482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.223 qpair failed and we were unable to recover it. 00:41:42.223 [2024-10-01 22:40:37.248652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.223 [2024-10-01 22:40:37.248663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.223 qpair failed and we were unable to recover it. 00:41:42.223 [2024-10-01 22:40:37.249018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.223 [2024-10-01 22:40:37.249028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.223 qpair failed and we were unable to recover it. 00:41:42.223 [2024-10-01 22:40:37.249316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.223 [2024-10-01 22:40:37.249326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.223 qpair failed and we were unable to recover it. 00:41:42.223 [2024-10-01 22:40:37.249631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.223 [2024-10-01 22:40:37.249641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.223 qpair failed and we were unable to recover it. 00:41:42.224 [2024-10-01 22:40:37.249924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.224 [2024-10-01 22:40:37.249934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.224 qpair failed and we were unable to recover it. 00:41:42.224 [2024-10-01 22:40:37.250269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.224 [2024-10-01 22:40:37.250279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.224 qpair failed and we were unable to recover it. 00:41:42.224 [2024-10-01 22:40:37.250467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.224 [2024-10-01 22:40:37.250478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.224 qpair failed and we were unable to recover it. 00:41:42.224 [2024-10-01 22:40:37.250786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.224 [2024-10-01 22:40:37.250796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.224 qpair failed and we were unable to recover it. 00:41:42.224 [2024-10-01 22:40:37.251123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.224 [2024-10-01 22:40:37.251133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.224 qpair failed and we were unable to recover it. 
00:41:42.224 [2024-10-01 22:40:37.251454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.224 [2024-10-01 22:40:37.251464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.224 qpair failed and we were unable to recover it. 00:41:42.224 [2024-10-01 22:40:37.251757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.224 [2024-10-01 22:40:37.251767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.224 qpair failed and we were unable to recover it. 00:41:42.224 [2024-10-01 22:40:37.252038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.224 [2024-10-01 22:40:37.252048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.224 qpair failed and we were unable to recover it. 00:41:42.224 [2024-10-01 22:40:37.252365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.224 [2024-10-01 22:40:37.252375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.224 qpair failed and we were unable to recover it. 00:41:42.224 [2024-10-01 22:40:37.252720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.224 [2024-10-01 22:40:37.252731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.224 qpair failed and we were unable to recover it. 00:41:42.224 [2024-10-01 22:40:37.253033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.224 [2024-10-01 22:40:37.253044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.224 qpair failed and we were unable to recover it. 00:41:42.224 [2024-10-01 22:40:37.253339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.224 [2024-10-01 22:40:37.253349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.224 qpair failed and we were unable to recover it. 00:41:42.224 [2024-10-01 22:40:37.253644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.224 [2024-10-01 22:40:37.253654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.224 qpair failed and we were unable to recover it. 00:41:42.224 [2024-10-01 22:40:37.253847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.224 [2024-10-01 22:40:37.253857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.224 qpair failed and we were unable to recover it. 00:41:42.224 [2024-10-01 22:40:37.254147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.224 [2024-10-01 22:40:37.254157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.224 qpair failed and we were unable to recover it. 
00:41:42.224 [2024-10-01 22:40:37.254347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.224 [2024-10-01 22:40:37.254365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.224 qpair failed and we were unable to recover it. 00:41:42.224 [2024-10-01 22:40:37.254665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.224 [2024-10-01 22:40:37.254675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.224 qpair failed and we were unable to recover it. 00:41:42.224 [2024-10-01 22:40:37.254997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.224 [2024-10-01 22:40:37.255008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.224 qpair failed and we were unable to recover it. 00:41:42.224 [2024-10-01 22:40:37.255358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.224 [2024-10-01 22:40:37.255368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.224 qpair failed and we were unable to recover it. 00:41:42.224 [2024-10-01 22:40:37.255681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.224 [2024-10-01 22:40:37.255691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.224 qpair failed and we were unable to recover it. 00:41:42.224 [2024-10-01 22:40:37.255924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.224 [2024-10-01 22:40:37.255934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.224 qpair failed and we were unable to recover it. 00:41:42.224 [2024-10-01 22:40:37.256149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.224 [2024-10-01 22:40:37.256159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.224 qpair failed and we were unable to recover it. 00:41:42.224 [2024-10-01 22:40:37.256383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.224 [2024-10-01 22:40:37.256396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.224 qpair failed and we were unable to recover it. 00:41:42.224 [2024-10-01 22:40:37.256680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.224 [2024-10-01 22:40:37.256690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.224 qpair failed and we were unable to recover it. 00:41:42.224 [2024-10-01 22:40:37.257010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.224 [2024-10-01 22:40:37.257020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.224 qpair failed and we were unable to recover it. 
00:41:42.224 [2024-10-01 22:40:37.257192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.224 [2024-10-01 22:40:37.257203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.224 qpair failed and we were unable to recover it. 00:41:42.224 [2024-10-01 22:40:37.257504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.224 [2024-10-01 22:40:37.257514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.224 qpair failed and we were unable to recover it. 00:41:42.224 [2024-10-01 22:40:37.257647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.224 [2024-10-01 22:40:37.257657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.224 qpair failed and we were unable to recover it. 00:41:42.224 [2024-10-01 22:40:37.257969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.224 [2024-10-01 22:40:37.257980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.224 qpair failed and we were unable to recover it. 00:41:42.224 [2024-10-01 22:40:37.258270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.224 [2024-10-01 22:40:37.258280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.224 qpair failed and we were unable to recover it. 00:41:42.224 [2024-10-01 22:40:37.258562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.224 [2024-10-01 22:40:37.258571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.224 qpair failed and we were unable to recover it. 00:41:42.224 [2024-10-01 22:40:37.258890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.224 [2024-10-01 22:40:37.258900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.224 qpair failed and we were unable to recover it. 00:41:42.224 [2024-10-01 22:40:37.259204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.224 [2024-10-01 22:40:37.259213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.224 qpair failed and we were unable to recover it. 00:41:42.224 [2024-10-01 22:40:37.259565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.224 [2024-10-01 22:40:37.259575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.224 qpair failed and we were unable to recover it. 00:41:42.224 [2024-10-01 22:40:37.259916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.224 [2024-10-01 22:40:37.259926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.224 qpair failed and we were unable to recover it. 
00:41:42.224 [2024-10-01 22:40:37.260209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.224 [2024-10-01 22:40:37.260219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.224 qpair failed and we were unable to recover it. 00:41:42.224 [2024-10-01 22:40:37.260488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.224 [2024-10-01 22:40:37.260498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.224 qpair failed and we were unable to recover it. 00:41:42.224 [2024-10-01 22:40:37.260790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.224 [2024-10-01 22:40:37.260800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.224 qpair failed and we were unable to recover it. 00:41:42.224 [2024-10-01 22:40:37.261145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.224 [2024-10-01 22:40:37.261155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.225 qpair failed and we were unable to recover it. 00:41:42.225 [2024-10-01 22:40:37.261434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.225 [2024-10-01 22:40:37.261444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.225 qpair failed and we were unable to recover it. 00:41:42.225 [2024-10-01 22:40:37.261756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.225 [2024-10-01 22:40:37.261766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.225 qpair failed and we were unable to recover it. 00:41:42.225 [2024-10-01 22:40:37.262056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.225 [2024-10-01 22:40:37.262065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.225 qpair failed and we were unable to recover it. 00:41:42.225 [2024-10-01 22:40:37.262237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.225 [2024-10-01 22:40:37.262248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.225 qpair failed and we were unable to recover it. 00:41:42.225 [2024-10-01 22:40:37.262569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.225 [2024-10-01 22:40:37.262579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.225 qpair failed and we were unable to recover it. 00:41:42.225 [2024-10-01 22:40:37.262931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.225 [2024-10-01 22:40:37.262941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.225 qpair failed and we were unable to recover it. 
00:41:42.225 [2024-10-01 22:40:37.263274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.225 [2024-10-01 22:40:37.263284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.225 qpair failed and we were unable to recover it. 00:41:42.225 [2024-10-01 22:40:37.263588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.225 [2024-10-01 22:40:37.263598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.225 qpair failed and we were unable to recover it. 00:41:42.225 [2024-10-01 22:40:37.263778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.225 [2024-10-01 22:40:37.263790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.225 qpair failed and we were unable to recover it. 00:41:42.225 [2024-10-01 22:40:37.264040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.225 [2024-10-01 22:40:37.264050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.225 qpair failed and we were unable to recover it. 00:41:42.225 [2024-10-01 22:40:37.264371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.225 [2024-10-01 22:40:37.264384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.225 qpair failed and we were unable to recover it. 00:41:42.225 [2024-10-01 22:40:37.264680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.225 [2024-10-01 22:40:37.264690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.225 qpair failed and we were unable to recover it. 00:41:42.225 [2024-10-01 22:40:37.265022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.225 [2024-10-01 22:40:37.265032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.225 qpair failed and we were unable to recover it. 00:41:42.225 [2024-10-01 22:40:37.265223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.225 [2024-10-01 22:40:37.265233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.225 qpair failed and we were unable to recover it. 00:41:42.225 [2024-10-01 22:40:37.265543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.225 [2024-10-01 22:40:37.265553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.225 qpair failed and we were unable to recover it. 00:41:42.225 [2024-10-01 22:40:37.265838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.225 [2024-10-01 22:40:37.265848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.225 qpair failed and we were unable to recover it. 
00:41:42.225 [2024-10-01 22:40:37.266179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.225 [2024-10-01 22:40:37.266188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.225 qpair failed and we were unable to recover it. 00:41:42.225 [2024-10-01 22:40:37.266470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.225 [2024-10-01 22:40:37.266480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.225 qpair failed and we were unable to recover it. 00:41:42.225 [2024-10-01 22:40:37.266787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.225 [2024-10-01 22:40:37.266797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.225 qpair failed and we were unable to recover it. 00:41:42.225 [2024-10-01 22:40:37.267010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.225 [2024-10-01 22:40:37.267020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.225 qpair failed and we were unable to recover it. 00:41:42.225 [2024-10-01 22:40:37.267345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.225 [2024-10-01 22:40:37.267355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.225 qpair failed and we were unable to recover it. 00:41:42.225 [2024-10-01 22:40:37.267669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.225 [2024-10-01 22:40:37.267679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.225 qpair failed and we were unable to recover it. 00:41:42.225 [2024-10-01 22:40:37.268009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.225 [2024-10-01 22:40:37.268019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.225 qpair failed and we were unable to recover it. 00:41:42.225 [2024-10-01 22:40:37.268327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.225 [2024-10-01 22:40:37.268336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.225 qpair failed and we were unable to recover it. 00:41:42.225 [2024-10-01 22:40:37.268665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.225 [2024-10-01 22:40:37.268675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.225 qpair failed and we were unable to recover it. 00:41:42.225 [2024-10-01 22:40:37.268986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.225 [2024-10-01 22:40:37.268997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.225 qpair failed and we were unable to recover it. 
00:41:42.225 [2024-10-01 22:40:37.269309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.225 [2024-10-01 22:40:37.269319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.225 qpair failed and we were unable to recover it. 00:41:42.225 [2024-10-01 22:40:37.269601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.225 [2024-10-01 22:40:37.269611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.225 qpair failed and we were unable to recover it. 00:41:42.225 [2024-10-01 22:40:37.269951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.225 [2024-10-01 22:40:37.269961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.225 qpair failed and we were unable to recover it. 00:41:42.225 [2024-10-01 22:40:37.270227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.225 [2024-10-01 22:40:37.270237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.225 qpair failed and we were unable to recover it. 00:41:42.225 [2024-10-01 22:40:37.270547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.225 [2024-10-01 22:40:37.270558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.225 qpair failed and we were unable to recover it. 00:41:42.225 [2024-10-01 22:40:37.270748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.225 [2024-10-01 22:40:37.270759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.225 qpair failed and we were unable to recover it. 00:41:42.225 [2024-10-01 22:40:37.270925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.225 [2024-10-01 22:40:37.270936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.225 qpair failed and we were unable to recover it. 00:41:42.225 [2024-10-01 22:40:37.271311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.225 [2024-10-01 22:40:37.271321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.225 qpair failed and we were unable to recover it. 00:41:42.225 [2024-10-01 22:40:37.271641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.225 [2024-10-01 22:40:37.271651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.225 qpair failed and we were unable to recover it. 00:41:42.225 [2024-10-01 22:40:37.271835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.225 [2024-10-01 22:40:37.271845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.225 qpair failed and we were unable to recover it. 
00:41:42.225 [2024-10-01 22:40:37.272180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.225 [2024-10-01 22:40:37.272190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.225 qpair failed and we were unable to recover it. 00:41:42.225 [2024-10-01 22:40:37.272505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.225 [2024-10-01 22:40:37.272516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.225 qpair failed and we were unable to recover it. 00:41:42.225 [2024-10-01 22:40:37.272832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.226 [2024-10-01 22:40:37.272843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.226 qpair failed and we were unable to recover it. 00:41:42.226 [2024-10-01 22:40:37.273121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.226 [2024-10-01 22:40:37.273132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.226 qpair failed and we were unable to recover it. 00:41:42.226 [2024-10-01 22:40:37.273401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.226 [2024-10-01 22:40:37.273411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.226 qpair failed and we were unable to recover it. 00:41:42.226 [2024-10-01 22:40:37.273496] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:42.226 [2024-10-01 22:40:37.273729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.226 [2024-10-01 22:40:37.273740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.226 qpair failed and we were unable to recover it. 00:41:42.226 [2024-10-01 22:40:37.274048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.226 [2024-10-01 22:40:37.274058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.226 qpair failed and we were unable to recover it. 00:41:42.226 [2024-10-01 22:40:37.274366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.226 [2024-10-01 22:40:37.274376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.226 qpair failed and we were unable to recover it. 00:41:42.226 [2024-10-01 22:40:37.274695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.226 [2024-10-01 22:40:37.274706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.226 qpair failed and we were unable to recover it. 00:41:42.226 [2024-10-01 22:40:37.274995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.226 [2024-10-01 22:40:37.275007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.226 qpair failed and we were unable to recover it. 
00:41:42.226 [2024-10-01 22:40:37.275327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.226 [2024-10-01 22:40:37.275338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.226 qpair failed and we were unable to recover it. 00:41:42.226 [2024-10-01 22:40:37.275631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.226 [2024-10-01 22:40:37.275642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.226 qpair failed and we were unable to recover it. 00:41:42.226 [2024-10-01 22:40:37.275940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.226 [2024-10-01 22:40:37.275950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.226 qpair failed and we were unable to recover it. 00:41:42.226 [2024-10-01 22:40:37.276237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.226 [2024-10-01 22:40:37.276247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.226 qpair failed and we were unable to recover it. 00:41:42.226 [2024-10-01 22:40:37.276458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.226 [2024-10-01 22:40:37.276468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.226 qpair failed and we were unable to recover it. 00:41:42.226 [2024-10-01 22:40:37.276784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.226 [2024-10-01 22:40:37.276794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.226 qpair failed and we were unable to recover it. 00:41:42.226 [2024-10-01 22:40:37.277161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.226 [2024-10-01 22:40:37.277171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.226 qpair failed and we were unable to recover it. 00:41:42.226 [2024-10-01 22:40:37.277441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.226 [2024-10-01 22:40:37.277451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.226 qpair failed and we were unable to recover it. 00:41:42.226 [2024-10-01 22:40:37.277760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.226 [2024-10-01 22:40:37.277770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.226 qpair failed and we were unable to recover it. 00:41:42.226 [2024-10-01 22:40:37.278112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.226 [2024-10-01 22:40:37.278122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.226 qpair failed and we were unable to recover it. 
00:41:42.226 [2024-10-01 22:40:37.278474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.226 [2024-10-01 22:40:37.278483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.226 qpair failed and we were unable to recover it. 00:41:42.226 [2024-10-01 22:40:37.278796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.226 [2024-10-01 22:40:37.278806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.226 qpair failed and we were unable to recover it. 00:41:42.226 [2024-10-01 22:40:37.279022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.226 [2024-10-01 22:40:37.279032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.226 qpair failed and we were unable to recover it. 00:41:42.226 [2024-10-01 22:40:37.279381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.226 [2024-10-01 22:40:37.279391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.226 qpair failed and we were unable to recover it. 00:41:42.226 [2024-10-01 22:40:37.279602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.226 [2024-10-01 22:40:37.279612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.226 qpair failed and we were unable to recover it. 00:41:42.226 [2024-10-01 22:40:37.279977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.226 [2024-10-01 22:40:37.279989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.226 qpair failed and we were unable to recover it. 00:41:42.226 [2024-10-01 22:40:37.280340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.226 [2024-10-01 22:40:37.280350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.226 qpair failed and we were unable to recover it. 00:41:42.226 [2024-10-01 22:40:37.280696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.226 [2024-10-01 22:40:37.280706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.226 qpair failed and we were unable to recover it. 00:41:42.226 [2024-10-01 22:40:37.281032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.226 [2024-10-01 22:40:37.281045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.226 qpair failed and we were unable to recover it. 00:41:42.226 [2024-10-01 22:40:37.281366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.226 [2024-10-01 22:40:37.281375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.226 qpair failed and we were unable to recover it. 
00:41:42.226 [2024-10-01 22:40:37.281665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.226 [2024-10-01 22:40:37.281675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.226 qpair failed and we were unable to recover it. 00:41:42.226 [2024-10-01 22:40:37.282014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.226 [2024-10-01 22:40:37.282024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.226 qpair failed and we were unable to recover it. 00:41:42.226 [2024-10-01 22:40:37.282299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.226 [2024-10-01 22:40:37.282309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.226 qpair failed and we were unable to recover it. 00:41:42.226 [2024-10-01 22:40:37.282627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.226 [2024-10-01 22:40:37.282638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.226 qpair failed and we were unable to recover it. 00:41:42.226 [2024-10-01 22:40:37.283054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.226 [2024-10-01 22:40:37.283064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.226 qpair failed and we were unable to recover it. 00:41:42.226 [2024-10-01 22:40:37.283421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.226 [2024-10-01 22:40:37.283431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.226 qpair failed and we were unable to recover it. 00:41:42.226 [2024-10-01 22:40:37.283775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.226 [2024-10-01 22:40:37.283785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.226 qpair failed and we were unable to recover it. 00:41:42.226 [2024-10-01 22:40:37.284118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.226 [2024-10-01 22:40:37.284128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.226 qpair failed and we were unable to recover it. 00:41:42.226 [2024-10-01 22:40:37.284398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.226 [2024-10-01 22:40:37.284408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.226 qpair failed and we were unable to recover it. 00:41:42.226 [2024-10-01 22:40:37.284708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.226 [2024-10-01 22:40:37.284719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.226 qpair failed and we were unable to recover it. 
00:41:42.230 [2024-10-01 22:40:37.329332] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:41:42.230 [2024-10-01 22:40:37.329359] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:41:42.230 [2024-10-01 22:40:37.329366] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only SPDK application currently running.
00:41:42.230 [2024-10-01 22:40:37.329377] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:41:42.231 [2024-10-01 22:40:37.329514] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5
00:41:42.231 [2024-10-01 22:40:37.329667] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6
00:41:42.231 [2024-10-01 22:40:37.329831] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4
00:41:42.231 [2024-10-01 22:40:37.329825] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7
00:41:42.231 [... the connect()/qpair failure sequence resumes at 22:40:37.329552 and repeats through 22:40:37.343055, interleaved with the reactor startup notices above ...]
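The four reactor notices reflect SPDK's threading model: one polling event loop pinned to each CPU core in the application's core mask, here cores 4 through 7. The sketch below is a conceptual stand-in using plain pthreads, not SPDK's reactor.c; it only illustrates the pin-one-loop-per-core idea that each "Reactor started on core N" notice announces.

/*
 * Conceptual sketch only (plain pthreads, not SPDK's reactor.c): start one
 * thread per core and pin it there, as SPDK does for each reactor.
 * Build with: cc -pthread -o reactors reactors.c
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *reactor_run(void *arg)
{
    int core = *(int *)arg;
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET(core, &set);
    /* Pin this thread to its core; SPDK reactors are pinned the same way. */
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
    printf("Reactor started on core %d\n", core);
    /* A real reactor would now poll its registered pollers in a tight loop. */
    return NULL;
}

int main(void)
{
    int cores[] = {4, 5, 6, 7};   /* matches the cores in the log above */
    pthread_t threads[4];

    for (int i = 0; i < 4; i++)
        pthread_create(&threads[i], NULL, reactor_run, &cores[i]);
    for (int i = 0; i < 4; i++)
        pthread_join(threads[i], NULL);
    return 0;
}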
00:41:42.232 [2024-10-01 22:40:37.343119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.232 [2024-10-01 22:40:37.343128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.232 qpair failed and we were unable to recover it. 00:41:42.232 [2024-10-01 22:40:37.343456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.232 [2024-10-01 22:40:37.343466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.232 qpair failed and we were unable to recover it. 00:41:42.232 [2024-10-01 22:40:37.343774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.232 [2024-10-01 22:40:37.343784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.232 qpair failed and we were unable to recover it. 00:41:42.232 [2024-10-01 22:40:37.344139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.232 [2024-10-01 22:40:37.344149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.232 qpair failed and we were unable to recover it. 00:41:42.232 [2024-10-01 22:40:37.344471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.232 [2024-10-01 22:40:37.344482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.232 qpair failed and we were unable to recover it. 00:41:42.232 [2024-10-01 22:40:37.344765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.232 [2024-10-01 22:40:37.344775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.232 qpair failed and we were unable to recover it. 00:41:42.232 [2024-10-01 22:40:37.345104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.232 [2024-10-01 22:40:37.345114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.232 qpair failed and we were unable to recover it. 00:41:42.232 [2024-10-01 22:40:37.345455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.232 [2024-10-01 22:40:37.345465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.232 qpair failed and we were unable to recover it. 00:41:42.232 [2024-10-01 22:40:37.345787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.232 [2024-10-01 22:40:37.345798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.232 qpair failed and we were unable to recover it. 00:41:42.232 [2024-10-01 22:40:37.346099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.232 [2024-10-01 22:40:37.346109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.232 qpair failed and we were unable to recover it. 
00:41:42.232 [2024-10-01 22:40:37.346331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.232 [2024-10-01 22:40:37.346340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.232 qpair failed and we were unable to recover it. 00:41:42.232 [2024-10-01 22:40:37.346685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.232 [2024-10-01 22:40:37.346695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.232 qpair failed and we were unable to recover it. 00:41:42.232 [2024-10-01 22:40:37.347012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.232 [2024-10-01 22:40:37.347023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.232 qpair failed and we were unable to recover it. 00:41:42.232 [2024-10-01 22:40:37.347186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.232 [2024-10-01 22:40:37.347195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.232 qpair failed and we were unable to recover it. 00:41:42.232 [2024-10-01 22:40:37.347432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.232 [2024-10-01 22:40:37.347443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.232 qpair failed and we were unable to recover it. 00:41:42.232 [2024-10-01 22:40:37.347752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.232 [2024-10-01 22:40:37.347763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.232 qpair failed and we were unable to recover it. 00:41:42.232 [2024-10-01 22:40:37.348120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.232 [2024-10-01 22:40:37.348130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.232 qpair failed and we were unable to recover it. 00:41:42.232 [2024-10-01 22:40:37.348450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.232 [2024-10-01 22:40:37.348468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.232 qpair failed and we were unable to recover it. 00:41:42.232 [2024-10-01 22:40:37.348782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.232 [2024-10-01 22:40:37.348793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.233 qpair failed and we were unable to recover it. 00:41:42.233 [2024-10-01 22:40:37.349090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.233 [2024-10-01 22:40:37.349100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.233 qpair failed and we were unable to recover it. 
00:41:42.233 [2024-10-01 22:40:37.349410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.233 [2024-10-01 22:40:37.349420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.233 qpair failed and we were unable to recover it. 00:41:42.233 [2024-10-01 22:40:37.349617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.233 [2024-10-01 22:40:37.349632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.233 qpair failed and we were unable to recover it. 00:41:42.233 [2024-10-01 22:40:37.349984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.233 [2024-10-01 22:40:37.349995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.233 qpair failed and we were unable to recover it. 00:41:42.233 [2024-10-01 22:40:37.350311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.233 [2024-10-01 22:40:37.350329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.233 qpair failed and we were unable to recover it. 00:41:42.233 [2024-10-01 22:40:37.350648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.233 [2024-10-01 22:40:37.350660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.233 qpair failed and we were unable to recover it. 00:41:42.233 [2024-10-01 22:40:37.350851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.233 [2024-10-01 22:40:37.350861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.233 qpair failed and we were unable to recover it. 00:41:42.233 [2024-10-01 22:40:37.351055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.233 [2024-10-01 22:40:37.351066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.233 qpair failed and we were unable to recover it. 00:41:42.233 [2024-10-01 22:40:37.351375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.233 [2024-10-01 22:40:37.351386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.233 qpair failed and we were unable to recover it. 00:41:42.233 [2024-10-01 22:40:37.351665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.233 [2024-10-01 22:40:37.351676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.233 qpair failed and we were unable to recover it. 00:41:42.233 [2024-10-01 22:40:37.351867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.233 [2024-10-01 22:40:37.351878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.233 qpair failed and we were unable to recover it. 
00:41:42.233 [2024-10-01 22:40:37.352198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.233 [2024-10-01 22:40:37.352209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.233 qpair failed and we were unable to recover it. 00:41:42.233 [2024-10-01 22:40:37.352542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.233 [2024-10-01 22:40:37.352553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.233 qpair failed and we were unable to recover it. 00:41:42.233 [2024-10-01 22:40:37.352829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.233 [2024-10-01 22:40:37.352842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.233 qpair failed and we were unable to recover it. 00:41:42.233 [2024-10-01 22:40:37.353140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.233 [2024-10-01 22:40:37.353152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.233 qpair failed and we were unable to recover it. 00:41:42.233 [2024-10-01 22:40:37.353426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.233 [2024-10-01 22:40:37.353436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.233 qpair failed and we were unable to recover it. 00:41:42.233 [2024-10-01 22:40:37.353725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.233 [2024-10-01 22:40:37.353735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.233 qpair failed and we were unable to recover it. 00:41:42.233 [2024-10-01 22:40:37.353907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.233 [2024-10-01 22:40:37.353917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.233 qpair failed and we were unable to recover it. 00:41:42.233 [2024-10-01 22:40:37.354257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.233 [2024-10-01 22:40:37.354266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.233 qpair failed and we were unable to recover it. 00:41:42.233 [2024-10-01 22:40:37.354571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.233 [2024-10-01 22:40:37.354582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.233 qpair failed and we were unable to recover it. 00:41:42.233 [2024-10-01 22:40:37.354905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.233 [2024-10-01 22:40:37.354917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.233 qpair failed and we were unable to recover it. 
00:41:42.233 [2024-10-01 22:40:37.355120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.233 [2024-10-01 22:40:37.355130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.233 qpair failed and we were unable to recover it. 00:41:42.233 [2024-10-01 22:40:37.355433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.233 [2024-10-01 22:40:37.355444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.233 qpair failed and we were unable to recover it. 00:41:42.233 [2024-10-01 22:40:37.355736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.233 [2024-10-01 22:40:37.355746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.233 qpair failed and we were unable to recover it. 00:41:42.233 [2024-10-01 22:40:37.356086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.233 [2024-10-01 22:40:37.356097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.233 qpair failed and we were unable to recover it. 00:41:42.233 [2024-10-01 22:40:37.356400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.233 [2024-10-01 22:40:37.356410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.233 qpair failed and we were unable to recover it. 00:41:42.233 [2024-10-01 22:40:37.356688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.233 [2024-10-01 22:40:37.356698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.233 qpair failed and we were unable to recover it. 00:41:42.233 [2024-10-01 22:40:37.357013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.233 [2024-10-01 22:40:37.357023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.233 qpair failed and we were unable to recover it. 00:41:42.233 [2024-10-01 22:40:37.357341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.233 [2024-10-01 22:40:37.357351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.233 qpair failed and we were unable to recover it. 00:41:42.233 [2024-10-01 22:40:37.357664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.233 [2024-10-01 22:40:37.357674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.233 qpair failed and we were unable to recover it. 00:41:42.233 [2024-10-01 22:40:37.358005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.233 [2024-10-01 22:40:37.358015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.233 qpair failed and we were unable to recover it. 
00:41:42.233 [2024-10-01 22:40:37.358237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.233 [2024-10-01 22:40:37.358247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.233 qpair failed and we were unable to recover it. 00:41:42.233 [2024-10-01 22:40:37.358452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.233 [2024-10-01 22:40:37.358462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.233 qpair failed and we were unable to recover it. 00:41:42.233 [2024-10-01 22:40:37.358756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.233 [2024-10-01 22:40:37.358773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.233 qpair failed and we were unable to recover it. 00:41:42.233 [2024-10-01 22:40:37.359038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.233 [2024-10-01 22:40:37.359047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.233 qpair failed and we were unable to recover it. 00:41:42.233 [2024-10-01 22:40:37.359378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.233 [2024-10-01 22:40:37.359388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.233 qpair failed and we were unable to recover it. 00:41:42.233 [2024-10-01 22:40:37.359587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.233 [2024-10-01 22:40:37.359598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.233 qpair failed and we were unable to recover it. 00:41:42.233 [2024-10-01 22:40:37.359681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.233 [2024-10-01 22:40:37.359692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.233 qpair failed and we were unable to recover it. 00:41:42.233 [2024-10-01 22:40:37.359778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.234 [2024-10-01 22:40:37.359788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.234 qpair failed and we were unable to recover it. 00:41:42.234 [2024-10-01 22:40:37.360161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.234 [2024-10-01 22:40:37.360171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.234 qpair failed and we were unable to recover it. 00:41:42.234 [2024-10-01 22:40:37.360447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.234 [2024-10-01 22:40:37.360460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.234 qpair failed and we were unable to recover it. 
00:41:42.234 [2024-10-01 22:40:37.360743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.234 [2024-10-01 22:40:37.360754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.234 qpair failed and we were unable to recover it. 00:41:42.234 [2024-10-01 22:40:37.361031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.234 [2024-10-01 22:40:37.361041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.234 qpair failed and we were unable to recover it. 00:41:42.234 [2024-10-01 22:40:37.361350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.234 [2024-10-01 22:40:37.361360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.234 qpair failed and we were unable to recover it. 00:41:42.234 [2024-10-01 22:40:37.361534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.234 [2024-10-01 22:40:37.361544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.234 qpair failed and we were unable to recover it. 00:41:42.234 [2024-10-01 22:40:37.361719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.234 [2024-10-01 22:40:37.361730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.234 qpair failed and we were unable to recover it. 00:41:42.234 [2024-10-01 22:40:37.362088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.234 [2024-10-01 22:40:37.362098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.234 qpair failed and we were unable to recover it. 00:41:42.234 [2024-10-01 22:40:37.362411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.234 [2024-10-01 22:40:37.362421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.234 qpair failed and we were unable to recover it. 00:41:42.234 [2024-10-01 22:40:37.362732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.234 [2024-10-01 22:40:37.362743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.234 qpair failed and we were unable to recover it. 00:41:42.234 [2024-10-01 22:40:37.362954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.234 [2024-10-01 22:40:37.362964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.234 qpair failed and we were unable to recover it. 00:41:42.234 [2024-10-01 22:40:37.363191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.234 [2024-10-01 22:40:37.363202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.234 qpair failed and we were unable to recover it. 
00:41:42.234 [2024-10-01 22:40:37.363537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.234 [2024-10-01 22:40:37.363547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.234 qpair failed and we were unable to recover it. 00:41:42.234 [2024-10-01 22:40:37.363839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.234 [2024-10-01 22:40:37.363850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.234 qpair failed and we were unable to recover it. 00:41:42.234 [2024-10-01 22:40:37.364167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.234 [2024-10-01 22:40:37.364177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.234 qpair failed and we were unable to recover it. 00:41:42.234 [2024-10-01 22:40:37.364493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.234 [2024-10-01 22:40:37.364503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.234 qpair failed and we were unable to recover it. 00:41:42.234 [2024-10-01 22:40:37.364825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.234 [2024-10-01 22:40:37.364837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.234 qpair failed and we were unable to recover it. 00:41:42.234 [2024-10-01 22:40:37.365015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.234 [2024-10-01 22:40:37.365025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.234 qpair failed and we were unable to recover it. 00:41:42.234 [2024-10-01 22:40:37.365332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.234 [2024-10-01 22:40:37.365342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.234 qpair failed and we were unable to recover it. 00:41:42.234 [2024-10-01 22:40:37.365655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.234 [2024-10-01 22:40:37.365665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.234 qpair failed and we were unable to recover it. 00:41:42.234 [2024-10-01 22:40:37.365954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.234 [2024-10-01 22:40:37.365964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.234 qpair failed and we were unable to recover it. 00:41:42.234 [2024-10-01 22:40:37.366245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.234 [2024-10-01 22:40:37.366255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.234 qpair failed and we were unable to recover it. 
00:41:42.234 [2024-10-01 22:40:37.366421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.234 [2024-10-01 22:40:37.366430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.234 qpair failed and we were unable to recover it. 00:41:42.234 [2024-10-01 22:40:37.366634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.234 [2024-10-01 22:40:37.366645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.234 qpair failed and we were unable to recover it. 00:41:42.234 [2024-10-01 22:40:37.366935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.234 [2024-10-01 22:40:37.366945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.234 qpair failed and we were unable to recover it. 00:41:42.234 [2024-10-01 22:40:37.367258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.234 [2024-10-01 22:40:37.367268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.234 qpair failed and we were unable to recover it. 00:41:42.234 [2024-10-01 22:40:37.367552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.234 [2024-10-01 22:40:37.367562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.234 qpair failed and we were unable to recover it. 00:41:42.234 [2024-10-01 22:40:37.367778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.234 [2024-10-01 22:40:37.367789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.234 qpair failed and we were unable to recover it. 00:41:42.234 [2024-10-01 22:40:37.367986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.234 [2024-10-01 22:40:37.367998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.234 qpair failed and we were unable to recover it. 00:41:42.234 [2024-10-01 22:40:37.368146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.234 [2024-10-01 22:40:37.368161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.234 qpair failed and we were unable to recover it. 00:41:42.234 [2024-10-01 22:40:37.368468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.234 [2024-10-01 22:40:37.368478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.234 qpair failed and we were unable to recover it. 00:41:42.234 [2024-10-01 22:40:37.368790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.234 [2024-10-01 22:40:37.368800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.234 qpair failed and we were unable to recover it. 
00:41:42.234 [2024-10-01 22:40:37.369009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.234 [2024-10-01 22:40:37.369019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.234 qpair failed and we were unable to recover it. 00:41:42.234 [2024-10-01 22:40:37.369290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.234 [2024-10-01 22:40:37.369299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.234 qpair failed and we were unable to recover it. 00:41:42.234 [2024-10-01 22:40:37.369640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.234 [2024-10-01 22:40:37.369651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.234 qpair failed and we were unable to recover it. 00:41:42.234 [2024-10-01 22:40:37.369851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.234 [2024-10-01 22:40:37.369861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.234 qpair failed and we were unable to recover it. 00:41:42.234 [2024-10-01 22:40:37.370210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.234 [2024-10-01 22:40:37.370220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.234 qpair failed and we were unable to recover it. 00:41:42.234 [2024-10-01 22:40:37.370530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.235 [2024-10-01 22:40:37.370541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.235 qpair failed and we were unable to recover it. 00:41:42.235 [2024-10-01 22:40:37.370842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.235 [2024-10-01 22:40:37.370853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.235 qpair failed and we were unable to recover it. 00:41:42.235 [2024-10-01 22:40:37.371180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.235 [2024-10-01 22:40:37.371189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.235 qpair failed and we were unable to recover it. 00:41:42.235 [2024-10-01 22:40:37.371504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.235 [2024-10-01 22:40:37.371514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.235 qpair failed and we were unable to recover it. 00:41:42.235 [2024-10-01 22:40:37.371848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.235 [2024-10-01 22:40:37.371859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.235 qpair failed and we were unable to recover it. 
00:41:42.235 [2024-10-01 22:40:37.372137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.235 [2024-10-01 22:40:37.372147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.235 qpair failed and we were unable to recover it. 00:41:42.235 [2024-10-01 22:40:37.372528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.235 [2024-10-01 22:40:37.372538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.235 qpair failed and we were unable to recover it. 00:41:42.235 [2024-10-01 22:40:37.372874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.235 [2024-10-01 22:40:37.372885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.235 qpair failed and we were unable to recover it. 00:41:42.235 [2024-10-01 22:40:37.373222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.235 [2024-10-01 22:40:37.373232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.235 qpair failed and we were unable to recover it. 00:41:42.235 [2024-10-01 22:40:37.373573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.235 [2024-10-01 22:40:37.373583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.235 qpair failed and we were unable to recover it. 00:41:42.235 [2024-10-01 22:40:37.373941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.235 [2024-10-01 22:40:37.373951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.235 qpair failed and we were unable to recover it. 00:41:42.235 [2024-10-01 22:40:37.374294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.235 [2024-10-01 22:40:37.374304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.235 qpair failed and we were unable to recover it. 00:41:42.235 [2024-10-01 22:40:37.374632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.235 [2024-10-01 22:40:37.374643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.235 qpair failed and we were unable to recover it. 00:41:42.235 [2024-10-01 22:40:37.374810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.235 [2024-10-01 22:40:37.374820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.235 qpair failed and we were unable to recover it. 00:41:42.235 [2024-10-01 22:40:37.375135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.235 [2024-10-01 22:40:37.375145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.235 qpair failed and we were unable to recover it. 
00:41:42.235 [2024-10-01 22:40:37.375311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.235 [2024-10-01 22:40:37.375322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.235 qpair failed and we were unable to recover it. 00:41:42.235 [2024-10-01 22:40:37.375517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.235 [2024-10-01 22:40:37.375528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.235 qpair failed and we were unable to recover it. 00:41:42.235 [2024-10-01 22:40:37.375894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.235 [2024-10-01 22:40:37.375904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.235 qpair failed and we were unable to recover it. 00:41:42.235 [2024-10-01 22:40:37.376187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.235 [2024-10-01 22:40:37.376196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.235 qpair failed and we were unable to recover it. 00:41:42.235 [2024-10-01 22:40:37.376375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.235 [2024-10-01 22:40:37.376385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.235 qpair failed and we were unable to recover it. 00:41:42.235 [2024-10-01 22:40:37.376705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.235 [2024-10-01 22:40:37.376716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.235 qpair failed and we were unable to recover it. 00:41:42.235 [2024-10-01 22:40:37.376768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.235 [2024-10-01 22:40:37.376779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.235 qpair failed and we were unable to recover it. 00:41:42.235 [2024-10-01 22:40:37.377111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.235 [2024-10-01 22:40:37.377120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.235 qpair failed and we were unable to recover it. 00:41:42.235 [2024-10-01 22:40:37.377311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.235 [2024-10-01 22:40:37.377322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.235 qpair failed and we were unable to recover it. 00:41:42.235 [2024-10-01 22:40:37.377445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.235 [2024-10-01 22:40:37.377454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.235 qpair failed and we were unable to recover it. 
00:41:42.235 [2024-10-01 22:40:37.377794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.235 [2024-10-01 22:40:37.377804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.235 qpair failed and we were unable to recover it. 00:41:42.235 [2024-10-01 22:40:37.378119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.235 [2024-10-01 22:40:37.378129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.235 qpair failed and we were unable to recover it. 00:41:42.235 [2024-10-01 22:40:37.378405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.235 [2024-10-01 22:40:37.378415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.235 qpair failed and we were unable to recover it. 00:41:42.235 [2024-10-01 22:40:37.378734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.235 [2024-10-01 22:40:37.378745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.235 qpair failed and we were unable to recover it. 00:41:42.235 [2024-10-01 22:40:37.378959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.235 [2024-10-01 22:40:37.378969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.235 qpair failed and we were unable to recover it. 00:41:42.235 [2024-10-01 22:40:37.379321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.235 [2024-10-01 22:40:37.379331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.235 qpair failed and we were unable to recover it. 00:41:42.235 [2024-10-01 22:40:37.379638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.235 [2024-10-01 22:40:37.379649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.235 qpair failed and we were unable to recover it. 00:41:42.235 [2024-10-01 22:40:37.379924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.235 [2024-10-01 22:40:37.379936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.235 qpair failed and we were unable to recover it. 00:41:42.235 [2024-10-01 22:40:37.380121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.235 [2024-10-01 22:40:37.380131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.235 qpair failed and we were unable to recover it. 00:41:42.235 [2024-10-01 22:40:37.380498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.235 [2024-10-01 22:40:37.380509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.235 qpair failed and we were unable to recover it. 
00:41:42.235 [2024-10-01 22:40:37.380884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:42.235 [2024-10-01 22:40:37.380894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420
00:41:42.235 qpair failed and we were unable to recover it.
[... the same three-line error repeats, with only the timestamps changing, roughly 210 times in total between 22:40:37.380 and 22:40:37.439: posix_sock_create reports connect() failed with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x130a180 with addr=10.0.0.2, port=4420, and each qpair fails and is not recovered. Duplicate entries trimmed. ...]
00:41:42.241 [2024-10-01 22:40:37.439553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.241 [2024-10-01 22:40:37.439563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.241 qpair failed and we were unable to recover it. 00:41:42.241 [2024-10-01 22:40:37.439640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.241 [2024-10-01 22:40:37.439650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.241 qpair failed and we were unable to recover it. 00:41:42.241 [2024-10-01 22:40:37.439936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.241 [2024-10-01 22:40:37.439946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.241 qpair failed and we were unable to recover it. 00:41:42.241 [2024-10-01 22:40:37.440329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.241 [2024-10-01 22:40:37.440339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.241 qpair failed and we were unable to recover it. 00:41:42.241 [2024-10-01 22:40:37.440545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.241 [2024-10-01 22:40:37.440558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.241 qpair failed and we were unable to recover it. 00:41:42.241 [2024-10-01 22:40:37.440946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.241 [2024-10-01 22:40:37.440957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.241 qpair failed and we were unable to recover it. 00:41:42.241 [2024-10-01 22:40:37.441295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.241 [2024-10-01 22:40:37.441305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.241 qpair failed and we were unable to recover it. 00:41:42.241 [2024-10-01 22:40:37.441622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.241 [2024-10-01 22:40:37.441636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.241 qpair failed and we were unable to recover it. 00:41:42.241 [2024-10-01 22:40:37.441925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.241 [2024-10-01 22:40:37.441935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.241 qpair failed and we were unable to recover it. 00:41:42.241 [2024-10-01 22:40:37.442274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.242 [2024-10-01 22:40:37.442284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.242 qpair failed and we were unable to recover it. 
00:41:42.242 [2024-10-01 22:40:37.442566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.242 [2024-10-01 22:40:37.442576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.242 qpair failed and we were unable to recover it. 00:41:42.242 [2024-10-01 22:40:37.442859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.242 [2024-10-01 22:40:37.442870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.242 qpair failed and we were unable to recover it. 00:41:42.242 [2024-10-01 22:40:37.443180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.242 [2024-10-01 22:40:37.443190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.242 qpair failed and we were unable to recover it. 00:41:42.242 [2024-10-01 22:40:37.443471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.242 [2024-10-01 22:40:37.443481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.242 qpair failed and we were unable to recover it. 00:41:42.242 [2024-10-01 22:40:37.443642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.242 [2024-10-01 22:40:37.443653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.242 qpair failed and we were unable to recover it. 00:41:42.242 [2024-10-01 22:40:37.443958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.242 [2024-10-01 22:40:37.443968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.242 qpair failed and we were unable to recover it. 00:41:42.242 [2024-10-01 22:40:37.444131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.242 [2024-10-01 22:40:37.444140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.242 qpair failed and we were unable to recover it. 00:41:42.242 [2024-10-01 22:40:37.444475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.242 [2024-10-01 22:40:37.444485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.242 qpair failed and we were unable to recover it. 00:41:42.242 [2024-10-01 22:40:37.444766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.242 [2024-10-01 22:40:37.444777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.242 qpair failed and we were unable to recover it. 00:41:42.242 [2024-10-01 22:40:37.444867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.242 [2024-10-01 22:40:37.444877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.242 qpair failed and we were unable to recover it. 
00:41:42.242 [2024-10-01 22:40:37.445031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.242 [2024-10-01 22:40:37.445040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.242 qpair failed and we were unable to recover it. 00:41:42.242 [2024-10-01 22:40:37.445354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.242 [2024-10-01 22:40:37.445365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.242 qpair failed and we were unable to recover it. 00:41:42.242 [2024-10-01 22:40:37.445668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.242 [2024-10-01 22:40:37.445678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.242 qpair failed and we were unable to recover it. 00:41:42.242 [2024-10-01 22:40:37.446018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.242 [2024-10-01 22:40:37.446028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.242 qpair failed and we were unable to recover it. 00:41:42.242 [2024-10-01 22:40:37.446393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.242 [2024-10-01 22:40:37.446402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.242 qpair failed and we were unable to recover it. 00:41:42.242 [2024-10-01 22:40:37.446723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.242 [2024-10-01 22:40:37.446733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.242 qpair failed and we were unable to recover it. 00:41:42.242 [2024-10-01 22:40:37.447054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.242 [2024-10-01 22:40:37.447064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.242 qpair failed and we were unable to recover it. 00:41:42.242 [2024-10-01 22:40:37.447357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.242 [2024-10-01 22:40:37.447366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.242 qpair failed and we were unable to recover it. 00:41:42.242 [2024-10-01 22:40:37.447708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.242 [2024-10-01 22:40:37.447719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.242 qpair failed and we were unable to recover it. 00:41:42.242 [2024-10-01 22:40:37.447946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.242 [2024-10-01 22:40:37.447956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.242 qpair failed and we were unable to recover it. 
00:41:42.242 [2024-10-01 22:40:37.448122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.242 [2024-10-01 22:40:37.448132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.242 qpair failed and we were unable to recover it. 00:41:42.242 [2024-10-01 22:40:37.448382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.242 [2024-10-01 22:40:37.448396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.242 qpair failed and we were unable to recover it. 00:41:42.242 [2024-10-01 22:40:37.448683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.242 [2024-10-01 22:40:37.448693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.242 qpair failed and we were unable to recover it. 00:41:42.242 [2024-10-01 22:40:37.448894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.242 [2024-10-01 22:40:37.448905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.242 qpair failed and we were unable to recover it. 00:41:42.242 [2024-10-01 22:40:37.449210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.242 [2024-10-01 22:40:37.449219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.242 qpair failed and we were unable to recover it. 00:41:42.242 [2024-10-01 22:40:37.449521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.242 [2024-10-01 22:40:37.449531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.242 qpair failed and we were unable to recover it. 00:41:42.242 [2024-10-01 22:40:37.449864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.242 [2024-10-01 22:40:37.449875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.242 qpair failed and we were unable to recover it. 00:41:42.242 [2024-10-01 22:40:37.450236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.242 [2024-10-01 22:40:37.450246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.242 qpair failed and we were unable to recover it. 00:41:42.242 [2024-10-01 22:40:37.450476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.242 [2024-10-01 22:40:37.450486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.242 qpair failed and we were unable to recover it. 00:41:42.242 [2024-10-01 22:40:37.450686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.242 [2024-10-01 22:40:37.450697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.242 qpair failed and we were unable to recover it. 
00:41:42.242 [2024-10-01 22:40:37.451044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.242 [2024-10-01 22:40:37.451054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.242 qpair failed and we were unable to recover it. 00:41:42.242 [2024-10-01 22:40:37.451320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.242 [2024-10-01 22:40:37.451329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.242 qpair failed and we were unable to recover it. 00:41:42.242 [2024-10-01 22:40:37.451655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.242 [2024-10-01 22:40:37.451665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.242 qpair failed and we were unable to recover it. 00:41:42.242 [2024-10-01 22:40:37.451981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.242 [2024-10-01 22:40:37.451991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.242 qpair failed and we were unable to recover it. 00:41:42.242 [2024-10-01 22:40:37.452318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.242 [2024-10-01 22:40:37.452328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.243 qpair failed and we were unable to recover it. 00:41:42.243 [2024-10-01 22:40:37.452618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.243 [2024-10-01 22:40:37.452637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.243 qpair failed and we were unable to recover it. 00:41:42.523 [2024-10-01 22:40:37.452951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.523 [2024-10-01 22:40:37.452962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.523 qpair failed and we were unable to recover it. 00:41:42.523 [2024-10-01 22:40:37.453219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.523 [2024-10-01 22:40:37.453229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.523 qpair failed and we were unable to recover it. 00:41:42.523 [2024-10-01 22:40:37.453559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.523 [2024-10-01 22:40:37.453569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.523 qpair failed and we were unable to recover it. 00:41:42.523 [2024-10-01 22:40:37.453875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.523 [2024-10-01 22:40:37.453886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.523 qpair failed and we were unable to recover it. 
00:41:42.523 [2024-10-01 22:40:37.454199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.523 [2024-10-01 22:40:37.454209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.523 qpair failed and we were unable to recover it. 00:41:42.523 [2024-10-01 22:40:37.454520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.523 [2024-10-01 22:40:37.454531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.523 qpair failed and we were unable to recover it. 00:41:42.523 [2024-10-01 22:40:37.454860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.523 [2024-10-01 22:40:37.454870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.523 qpair failed and we were unable to recover it. 00:41:42.523 [2024-10-01 22:40:37.455029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.523 [2024-10-01 22:40:37.455040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.523 qpair failed and we were unable to recover it. 00:41:42.523 [2024-10-01 22:40:37.455317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.523 [2024-10-01 22:40:37.455327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.523 qpair failed and we were unable to recover it. 00:41:42.523 [2024-10-01 22:40:37.455520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.523 [2024-10-01 22:40:37.455530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.523 qpair failed and we were unable to recover it. 00:41:42.523 [2024-10-01 22:40:37.455687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.524 [2024-10-01 22:40:37.455697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.524 qpair failed and we were unable to recover it. 00:41:42.524 [2024-10-01 22:40:37.455776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.524 [2024-10-01 22:40:37.455786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.524 qpair failed and we were unable to recover it. 00:41:42.524 [2024-10-01 22:40:37.455994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.524 [2024-10-01 22:40:37.456006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.524 qpair failed and we were unable to recover it. 00:41:42.524 [2024-10-01 22:40:37.456304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.524 [2024-10-01 22:40:37.456314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.524 qpair failed and we were unable to recover it. 
00:41:42.524 [2024-10-01 22:40:37.456515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.524 [2024-10-01 22:40:37.456525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.524 qpair failed and we were unable to recover it. 00:41:42.524 [2024-10-01 22:40:37.456822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.524 [2024-10-01 22:40:37.456832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.524 qpair failed and we were unable to recover it. 00:41:42.524 [2024-10-01 22:40:37.457170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.524 [2024-10-01 22:40:37.457181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.524 qpair failed and we were unable to recover it. 00:41:42.524 [2024-10-01 22:40:37.457495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.524 [2024-10-01 22:40:37.457505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.524 qpair failed and we were unable to recover it. 00:41:42.524 [2024-10-01 22:40:37.457782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.524 [2024-10-01 22:40:37.457792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.524 qpair failed and we were unable to recover it. 00:41:42.524 [2024-10-01 22:40:37.458093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.524 [2024-10-01 22:40:37.458103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.524 qpair failed and we were unable to recover it. 00:41:42.524 [2024-10-01 22:40:37.458304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.524 [2024-10-01 22:40:37.458314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.524 qpair failed and we were unable to recover it. 00:41:42.524 [2024-10-01 22:40:37.458494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.524 [2024-10-01 22:40:37.458505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.524 qpair failed and we were unable to recover it. 00:41:42.524 [2024-10-01 22:40:37.458806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.524 [2024-10-01 22:40:37.458816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.524 qpair failed and we were unable to recover it. 00:41:42.524 [2024-10-01 22:40:37.459004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.524 [2024-10-01 22:40:37.459014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.524 qpair failed and we were unable to recover it. 
00:41:42.524 [2024-10-01 22:40:37.459338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.524 [2024-10-01 22:40:37.459348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.524 qpair failed and we were unable to recover it. 00:41:42.524 [2024-10-01 22:40:37.459638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.524 [2024-10-01 22:40:37.459649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.524 qpair failed and we were unable to recover it. 00:41:42.524 [2024-10-01 22:40:37.459853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.524 [2024-10-01 22:40:37.459863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.524 qpair failed and we were unable to recover it. 00:41:42.524 [2024-10-01 22:40:37.460179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.524 [2024-10-01 22:40:37.460189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.524 qpair failed and we were unable to recover it. 00:41:42.524 [2024-10-01 22:40:37.460504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.524 [2024-10-01 22:40:37.460514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.524 qpair failed and we were unable to recover it. 00:41:42.524 [2024-10-01 22:40:37.460732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.524 [2024-10-01 22:40:37.460742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.524 qpair failed and we were unable to recover it. 00:41:42.524 [2024-10-01 22:40:37.461053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.524 [2024-10-01 22:40:37.461063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.524 qpair failed and we were unable to recover it. 00:41:42.524 [2024-10-01 22:40:37.461223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.524 [2024-10-01 22:40:37.461233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.524 qpair failed and we were unable to recover it. 00:41:42.524 [2024-10-01 22:40:37.461509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.524 [2024-10-01 22:40:37.461520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.524 qpair failed and we were unable to recover it. 00:41:42.524 [2024-10-01 22:40:37.461858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.524 [2024-10-01 22:40:37.461868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.524 qpair failed and we were unable to recover it. 
00:41:42.524 [2024-10-01 22:40:37.462193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.524 [2024-10-01 22:40:37.462202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.524 qpair failed and we were unable to recover it. 00:41:42.524 [2024-10-01 22:40:37.462520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.524 [2024-10-01 22:40:37.462529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.524 qpair failed and we were unable to recover it. 00:41:42.524 [2024-10-01 22:40:37.462852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.524 [2024-10-01 22:40:37.462862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.524 qpair failed and we were unable to recover it. 00:41:42.524 [2024-10-01 22:40:37.463055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.524 [2024-10-01 22:40:37.463065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.524 qpair failed and we were unable to recover it. 00:41:42.524 [2024-10-01 22:40:37.463262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.524 [2024-10-01 22:40:37.463272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.524 qpair failed and we were unable to recover it. 00:41:42.524 [2024-10-01 22:40:37.463555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.524 [2024-10-01 22:40:37.463565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.524 qpair failed and we were unable to recover it. 00:41:42.524 [2024-10-01 22:40:37.463841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.524 [2024-10-01 22:40:37.463852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.524 qpair failed and we were unable to recover it. 00:41:42.524 [2024-10-01 22:40:37.464029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.524 [2024-10-01 22:40:37.464040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.524 qpair failed and we were unable to recover it. 00:41:42.524 [2024-10-01 22:40:37.464218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.524 [2024-10-01 22:40:37.464228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.524 qpair failed and we were unable to recover it. 00:41:42.524 [2024-10-01 22:40:37.464416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.524 [2024-10-01 22:40:37.464426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.524 qpair failed and we were unable to recover it. 
00:41:42.524 [2024-10-01 22:40:37.464770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.524 [2024-10-01 22:40:37.464780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.524 qpair failed and we were unable to recover it. 00:41:42.525 [2024-10-01 22:40:37.465124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.525 [2024-10-01 22:40:37.465134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.525 qpair failed and we were unable to recover it. 00:41:42.525 [2024-10-01 22:40:37.465470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.525 [2024-10-01 22:40:37.465480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.525 qpair failed and we were unable to recover it. 00:41:42.525 [2024-10-01 22:40:37.465739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.525 [2024-10-01 22:40:37.465749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.525 qpair failed and we were unable to recover it. 00:41:42.525 [2024-10-01 22:40:37.466030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.525 [2024-10-01 22:40:37.466039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.525 qpair failed and we were unable to recover it. 00:41:42.525 [2024-10-01 22:40:37.466352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.525 [2024-10-01 22:40:37.466362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.525 qpair failed and we were unable to recover it. 00:41:42.525 [2024-10-01 22:40:37.466719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.525 [2024-10-01 22:40:37.466730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.525 qpair failed and we were unable to recover it. 00:41:42.525 [2024-10-01 22:40:37.466916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.525 [2024-10-01 22:40:37.466926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.525 qpair failed and we were unable to recover it. 00:41:42.525 [2024-10-01 22:40:37.467173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.525 [2024-10-01 22:40:37.467183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.525 qpair failed and we were unable to recover it. 00:41:42.525 [2024-10-01 22:40:37.467457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.525 [2024-10-01 22:40:37.467467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.525 qpair failed and we were unable to recover it. 
00:41:42.525 [2024-10-01 22:40:37.467772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.525 [2024-10-01 22:40:37.467782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.525 qpair failed and we were unable to recover it. 00:41:42.525 [2024-10-01 22:40:37.467965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.525 [2024-10-01 22:40:37.467975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.525 qpair failed and we were unable to recover it. 00:41:42.525 [2024-10-01 22:40:37.468305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.525 [2024-10-01 22:40:37.468315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.525 qpair failed and we were unable to recover it. 00:41:42.525 [2024-10-01 22:40:37.468495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.525 [2024-10-01 22:40:37.468505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.525 qpair failed and we were unable to recover it. 00:41:42.525 [2024-10-01 22:40:37.468811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.525 [2024-10-01 22:40:37.468821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.525 qpair failed and we were unable to recover it. 00:41:42.525 [2024-10-01 22:40:37.469168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.525 [2024-10-01 22:40:37.469177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.525 qpair failed and we were unable to recover it. 00:41:42.525 [2024-10-01 22:40:37.469337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.525 [2024-10-01 22:40:37.469346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.525 qpair failed and we were unable to recover it. 00:41:42.525 [2024-10-01 22:40:37.469715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.525 [2024-10-01 22:40:37.469725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.525 qpair failed and we were unable to recover it. 00:41:42.525 [2024-10-01 22:40:37.470073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.525 [2024-10-01 22:40:37.470082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.525 qpair failed and we were unable to recover it. 00:41:42.525 [2024-10-01 22:40:37.470392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.525 [2024-10-01 22:40:37.470402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.525 qpair failed and we were unable to recover it. 
00:41:42.525 [2024-10-01 22:40:37.470577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.525 [2024-10-01 22:40:37.470586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.525 qpair failed and we were unable to recover it. 00:41:42.525 [2024-10-01 22:40:37.470801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.525 [2024-10-01 22:40:37.470812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.525 qpair failed and we were unable to recover it. 00:41:42.525 [2024-10-01 22:40:37.471125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.525 [2024-10-01 22:40:37.471135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.525 qpair failed and we were unable to recover it. 00:41:42.525 [2024-10-01 22:40:37.471407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.525 [2024-10-01 22:40:37.471417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.525 qpair failed and we were unable to recover it. 00:41:42.525 [2024-10-01 22:40:37.471737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.525 [2024-10-01 22:40:37.471748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.525 qpair failed and we were unable to recover it. 00:41:42.525 [2024-10-01 22:40:37.472023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.525 [2024-10-01 22:40:37.472033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.525 qpair failed and we were unable to recover it. 00:41:42.525 [2024-10-01 22:40:37.472221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.525 [2024-10-01 22:40:37.472233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.525 qpair failed and we were unable to recover it. 00:41:42.525 [2024-10-01 22:40:37.472558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.525 [2024-10-01 22:40:37.472569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.525 qpair failed and we were unable to recover it. 00:41:42.525 [2024-10-01 22:40:37.472879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.525 [2024-10-01 22:40:37.472889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.525 qpair failed and we were unable to recover it. 00:41:42.525 [2024-10-01 22:40:37.473204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.525 [2024-10-01 22:40:37.473214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.525 qpair failed and we were unable to recover it. 
00:41:42.525 [2024-10-01 22:40:37.473437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.525 [2024-10-01 22:40:37.473447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.525 qpair failed and we were unable to recover it. 00:41:42.525 [2024-10-01 22:40:37.473726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.525 [2024-10-01 22:40:37.473736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.525 qpair failed and we were unable to recover it. 00:41:42.525 [2024-10-01 22:40:37.474078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.525 [2024-10-01 22:40:37.474088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.525 qpair failed and we were unable to recover it. 00:41:42.525 [2024-10-01 22:40:37.474399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.525 [2024-10-01 22:40:37.474409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.525 qpair failed and we were unable to recover it. 00:41:42.525 [2024-10-01 22:40:37.474698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.525 [2024-10-01 22:40:37.474708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.525 qpair failed and we were unable to recover it. 00:41:42.525 [2024-10-01 22:40:37.475015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.525 [2024-10-01 22:40:37.475025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.525 qpair failed and we were unable to recover it. 00:41:42.525 [2024-10-01 22:40:37.475358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.525 [2024-10-01 22:40:37.475371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.525 qpair failed and we were unable to recover it. 00:41:42.525 [2024-10-01 22:40:37.475693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.525 [2024-10-01 22:40:37.475703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.525 qpair failed and we were unable to recover it. 00:41:42.525 [2024-10-01 22:40:37.476013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.526 [2024-10-01 22:40:37.476024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.526 qpair failed and we were unable to recover it. 00:41:42.526 [2024-10-01 22:40:37.476214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.526 [2024-10-01 22:40:37.476225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.526 qpair failed and we were unable to recover it. 
00:41:42.526 [2024-10-01 22:40:37.476546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:42.526 [2024-10-01 22:40:37.476556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420
00:41:42.526 qpair failed and we were unable to recover it.
[... the three lines above repeat verbatim for roughly fifty further connect attempts on tqpair=0x130a180, with only the bracketed timestamps advancing from 22:40:37.476 through 22:40:37.491; every attempt fails with errno = 111 ...]
[... four more identical errno = 111 failures on tqpair=0x130a180 (22:40:37.491836 – 22:40:37.492459) ...]
00:41:42.527 [2024-10-01 22:40:37.492675] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300ed0 is same with the state(6) to be set
00:41:42.527 [2024-10-01 22:40:37.493248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:42.527 [2024-10-01 22:40:37.493335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7f8000b90 with addr=10.0.0.2, port=4420
00:41:42.527 qpair failed and we were unable to recover it.
[... a second attempt on tqpair=0x7fa7f8000b90 (22:40:37.493669) fails the same way, after which the errno = 111 failures resume on tqpair=0x130a180 (22:40:37.494001 – 22:40:37.494691) ...]
[... the same three-line failure sequence (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock error on tqpair=0x130a180 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it") repeats roughly 150 more times, with timestamps advancing from 22:40:37.495031 to 22:40:37.536700 ...]
00:41:42.531 [2024-10-01 22:40:37.536994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.531 [2024-10-01 22:40:37.537004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.531 qpair failed and we were unable to recover it. 00:41:42.531 [2024-10-01 22:40:37.537233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.531 [2024-10-01 22:40:37.537242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.531 qpair failed and we were unable to recover it. 00:41:42.531 [2024-10-01 22:40:37.537530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.531 [2024-10-01 22:40:37.537540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.531 qpair failed and we were unable to recover it. 00:41:42.531 [2024-10-01 22:40:37.537866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.531 [2024-10-01 22:40:37.537876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.531 qpair failed and we were unable to recover it. 00:41:42.531 [2024-10-01 22:40:37.538142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.531 [2024-10-01 22:40:37.538152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.531 qpair failed and we were unable to recover it. 00:41:42.531 [2024-10-01 22:40:37.538487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.531 [2024-10-01 22:40:37.538497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.532 qpair failed and we were unable to recover it. 00:41:42.532 [2024-10-01 22:40:37.538776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.532 [2024-10-01 22:40:37.538786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.532 qpair failed and we were unable to recover it. 00:41:42.532 [2024-10-01 22:40:37.539122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.532 [2024-10-01 22:40:37.539133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.532 qpair failed and we were unable to recover it. 00:41:42.532 [2024-10-01 22:40:37.539289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.532 [2024-10-01 22:40:37.539299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.532 qpair failed and we were unable to recover it. 00:41:42.532 [2024-10-01 22:40:37.539609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.532 [2024-10-01 22:40:37.539619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.532 qpair failed and we were unable to recover it. 
00:41:42.532 [2024-10-01 22:40:37.540004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.532 [2024-10-01 22:40:37.540014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.532 qpair failed and we were unable to recover it. 00:41:42.532 [2024-10-01 22:40:37.540317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.532 [2024-10-01 22:40:37.540327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.532 qpair failed and we were unable to recover it. 00:41:42.532 [2024-10-01 22:40:37.540660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.532 [2024-10-01 22:40:37.540671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.532 qpair failed and we were unable to recover it. 00:41:42.532 [2024-10-01 22:40:37.540957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.532 [2024-10-01 22:40:37.540975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.532 qpair failed and we were unable to recover it. 00:41:42.532 [2024-10-01 22:40:37.541308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.532 [2024-10-01 22:40:37.541317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.532 qpair failed and we were unable to recover it. 00:41:42.532 [2024-10-01 22:40:37.541632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.532 [2024-10-01 22:40:37.541642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.532 qpair failed and we were unable to recover it. 00:41:42.532 [2024-10-01 22:40:37.541853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.532 [2024-10-01 22:40:37.541863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.532 qpair failed and we were unable to recover it. 00:41:42.532 [2024-10-01 22:40:37.542165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.532 [2024-10-01 22:40:37.542178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.532 qpair failed and we were unable to recover it. 00:41:42.532 [2024-10-01 22:40:37.542492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.532 [2024-10-01 22:40:37.542502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.532 qpair failed and we were unable to recover it. 00:41:42.532 [2024-10-01 22:40:37.542835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.532 [2024-10-01 22:40:37.542845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.532 qpair failed and we were unable to recover it. 
00:41:42.532 [2024-10-01 22:40:37.543132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.532 [2024-10-01 22:40:37.543149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.532 qpair failed and we were unable to recover it. 00:41:42.532 [2024-10-01 22:40:37.543467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.532 [2024-10-01 22:40:37.543477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.532 qpair failed and we were unable to recover it. 00:41:42.532 [2024-10-01 22:40:37.543660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.532 [2024-10-01 22:40:37.543670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.532 qpair failed and we were unable to recover it. 00:41:42.532 [2024-10-01 22:40:37.543994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.532 [2024-10-01 22:40:37.544004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.532 qpair failed and we were unable to recover it. 00:41:42.532 [2024-10-01 22:40:37.544280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.532 [2024-10-01 22:40:37.544290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.532 qpair failed and we were unable to recover it. 00:41:42.532 [2024-10-01 22:40:37.544475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.532 [2024-10-01 22:40:37.544485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.532 qpair failed and we were unable to recover it. 00:41:42.532 [2024-10-01 22:40:37.544677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.532 [2024-10-01 22:40:37.544688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.532 qpair failed and we were unable to recover it. 00:41:42.532 [2024-10-01 22:40:37.544773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.532 [2024-10-01 22:40:37.544783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.532 qpair failed and we were unable to recover it. 00:41:42.532 [2024-10-01 22:40:37.545077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.532 [2024-10-01 22:40:37.545087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.532 qpair failed and we were unable to recover it. 00:41:42.532 [2024-10-01 22:40:37.545281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.532 [2024-10-01 22:40:37.545292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.532 qpair failed and we were unable to recover it. 
00:41:42.532 [2024-10-01 22:40:37.545562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.532 [2024-10-01 22:40:37.545572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.532 qpair failed and we were unable to recover it. 00:41:42.532 [2024-10-01 22:40:37.545883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.532 [2024-10-01 22:40:37.545894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.532 qpair failed and we were unable to recover it. 00:41:42.532 [2024-10-01 22:40:37.546174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.532 [2024-10-01 22:40:37.546184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.532 qpair failed and we were unable to recover it. 00:41:42.532 [2024-10-01 22:40:37.546368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.532 [2024-10-01 22:40:37.546378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.532 qpair failed and we were unable to recover it. 00:41:42.532 [2024-10-01 22:40:37.546571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.532 [2024-10-01 22:40:37.546581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.532 qpair failed and we were unable to recover it. 00:41:42.532 [2024-10-01 22:40:37.546767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.532 [2024-10-01 22:40:37.546777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.532 qpair failed and we were unable to recover it. 00:41:42.532 [2024-10-01 22:40:37.547096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.532 [2024-10-01 22:40:37.547106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.532 qpair failed and we were unable to recover it. 00:41:42.532 [2024-10-01 22:40:37.547286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.532 [2024-10-01 22:40:37.547295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.532 qpair failed and we were unable to recover it. 00:41:42.532 [2024-10-01 22:40:37.547583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.532 [2024-10-01 22:40:37.547593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.532 qpair failed and we were unable to recover it. 00:41:42.532 [2024-10-01 22:40:37.547906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.532 [2024-10-01 22:40:37.547916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.532 qpair failed and we were unable to recover it. 
00:41:42.532 [2024-10-01 22:40:37.548245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.532 [2024-10-01 22:40:37.548255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.532 qpair failed and we were unable to recover it. 00:41:42.533 [2024-10-01 22:40:37.548565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.533 [2024-10-01 22:40:37.548575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.533 qpair failed and we were unable to recover it. 00:41:42.533 [2024-10-01 22:40:37.548857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.533 [2024-10-01 22:40:37.548867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.533 qpair failed and we were unable to recover it. 00:41:42.533 [2024-10-01 22:40:37.549177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.533 [2024-10-01 22:40:37.549187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.533 qpair failed and we were unable to recover it. 00:41:42.533 [2024-10-01 22:40:37.549466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.533 [2024-10-01 22:40:37.549478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.533 qpair failed and we were unable to recover it. 00:41:42.533 [2024-10-01 22:40:37.549706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.533 [2024-10-01 22:40:37.549718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.533 qpair failed and we were unable to recover it. 00:41:42.533 [2024-10-01 22:40:37.549997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.533 [2024-10-01 22:40:37.550007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.533 qpair failed and we were unable to recover it. 00:41:42.533 [2024-10-01 22:40:37.550274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.533 [2024-10-01 22:40:37.550284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.533 qpair failed and we were unable to recover it. 00:41:42.533 [2024-10-01 22:40:37.550484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.533 [2024-10-01 22:40:37.550494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.533 qpair failed and we were unable to recover it. 00:41:42.533 [2024-10-01 22:40:37.550809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.533 [2024-10-01 22:40:37.550819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.533 qpair failed and we were unable to recover it. 
00:41:42.533 [2024-10-01 22:40:37.551166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.533 [2024-10-01 22:40:37.551176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.533 qpair failed and we were unable to recover it. 00:41:42.533 [2024-10-01 22:40:37.551487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.533 [2024-10-01 22:40:37.551497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.533 qpair failed and we were unable to recover it. 00:41:42.533 [2024-10-01 22:40:37.551669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.533 [2024-10-01 22:40:37.551679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.533 qpair failed and we were unable to recover it. 00:41:42.533 [2024-10-01 22:40:37.551993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.533 [2024-10-01 22:40:37.552003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.533 qpair failed and we were unable to recover it. 00:41:42.533 [2024-10-01 22:40:37.552344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.533 [2024-10-01 22:40:37.552354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.533 qpair failed and we were unable to recover it. 00:41:42.533 [2024-10-01 22:40:37.552657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.533 [2024-10-01 22:40:37.552667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.533 qpair failed and we were unable to recover it. 00:41:42.533 [2024-10-01 22:40:37.552931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.533 [2024-10-01 22:40:37.552940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.533 qpair failed and we were unable to recover it. 00:41:42.533 [2024-10-01 22:40:37.553244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.533 [2024-10-01 22:40:37.553253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.533 qpair failed and we were unable to recover it. 00:41:42.533 [2024-10-01 22:40:37.553530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.533 [2024-10-01 22:40:37.553540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.533 qpair failed and we were unable to recover it. 00:41:42.533 [2024-10-01 22:40:37.553843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.533 [2024-10-01 22:40:37.553853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.533 qpair failed and we were unable to recover it. 
00:41:42.533 [2024-10-01 22:40:37.554178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.533 [2024-10-01 22:40:37.554187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.533 qpair failed and we were unable to recover it. 00:41:42.533 [2024-10-01 22:40:37.554393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.533 [2024-10-01 22:40:37.554402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.533 qpair failed and we were unable to recover it. 00:41:42.533 [2024-10-01 22:40:37.554759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.533 [2024-10-01 22:40:37.554769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.533 qpair failed and we were unable to recover it. 00:41:42.533 [2024-10-01 22:40:37.555058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.533 [2024-10-01 22:40:37.555068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.533 qpair failed and we were unable to recover it. 00:41:42.533 [2024-10-01 22:40:37.555298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.533 [2024-10-01 22:40:37.555308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.533 qpair failed and we were unable to recover it. 00:41:42.533 [2024-10-01 22:40:37.555617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.533 [2024-10-01 22:40:37.555632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.533 qpair failed and we were unable to recover it. 00:41:42.533 [2024-10-01 22:40:37.555813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.533 [2024-10-01 22:40:37.555824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.533 qpair failed and we were unable to recover it. 00:41:42.533 [2024-10-01 22:40:37.556125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.533 [2024-10-01 22:40:37.556135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.533 qpair failed and we were unable to recover it. 00:41:42.533 [2024-10-01 22:40:37.556428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.533 [2024-10-01 22:40:37.556446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.533 qpair failed and we were unable to recover it. 00:41:42.533 [2024-10-01 22:40:37.556762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.533 [2024-10-01 22:40:37.556772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.533 qpair failed and we were unable to recover it. 
00:41:42.533 [2024-10-01 22:40:37.557068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.533 [2024-10-01 22:40:37.557078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.533 qpair failed and we were unable to recover it. 00:41:42.533 [2024-10-01 22:40:37.557272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.533 [2024-10-01 22:40:37.557282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.534 qpair failed and we were unable to recover it. 00:41:42.534 [2024-10-01 22:40:37.557597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.534 [2024-10-01 22:40:37.557606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.534 qpair failed and we were unable to recover it. 00:41:42.534 [2024-10-01 22:40:37.557807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.534 [2024-10-01 22:40:37.557817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.534 qpair failed and we were unable to recover it. 00:41:42.534 [2024-10-01 22:40:37.558009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.534 [2024-10-01 22:40:37.558018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.534 qpair failed and we were unable to recover it. 00:41:42.534 [2024-10-01 22:40:37.558304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.534 [2024-10-01 22:40:37.558313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.534 qpair failed and we were unable to recover it. 00:41:42.534 [2024-10-01 22:40:37.558627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.534 [2024-10-01 22:40:37.558638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.534 qpair failed and we were unable to recover it. 00:41:42.534 [2024-10-01 22:40:37.558801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.534 [2024-10-01 22:40:37.558811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.534 qpair failed and we were unable to recover it. 00:41:42.534 [2024-10-01 22:40:37.559160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.534 [2024-10-01 22:40:37.559170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.534 qpair failed and we were unable to recover it. 00:41:42.534 [2024-10-01 22:40:37.559472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.534 [2024-10-01 22:40:37.559481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.534 qpair failed and we were unable to recover it. 
00:41:42.534 [2024-10-01 22:40:37.559811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.534 [2024-10-01 22:40:37.559821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.534 qpair failed and we were unable to recover it. 00:41:42.534 [2024-10-01 22:40:37.560136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.534 [2024-10-01 22:40:37.560145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.534 qpair failed and we were unable to recover it. 00:41:42.534 [2024-10-01 22:40:37.560457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.534 [2024-10-01 22:40:37.560466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.534 qpair failed and we were unable to recover it. 00:41:42.534 [2024-10-01 22:40:37.560751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.534 [2024-10-01 22:40:37.560761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.534 qpair failed and we were unable to recover it. 00:41:42.534 [2024-10-01 22:40:37.560928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.534 [2024-10-01 22:40:37.560945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.534 qpair failed and we were unable to recover it. 00:41:42.534 [2024-10-01 22:40:37.561306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.534 [2024-10-01 22:40:37.561411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa804000b90 with addr=10.0.0.2, port=4420 00:41:42.534 qpair failed and we were unable to recover it. 00:41:42.534 [2024-10-01 22:40:37.561727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.534 [2024-10-01 22:40:37.561779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa804000b90 with addr=10.0.0.2, port=4420 00:41:42.534 qpair failed and we were unable to recover it. 00:41:42.534 [2024-10-01 22:40:37.561850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.534 [2024-10-01 22:40:37.561862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.534 qpair failed and we were unable to recover it. 00:41:42.534 [2024-10-01 22:40:37.562095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.534 [2024-10-01 22:40:37.562106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.534 qpair failed and we were unable to recover it. 00:41:42.534 [2024-10-01 22:40:37.562431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.534 [2024-10-01 22:40:37.562441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.534 qpair failed and we were unable to recover it. 
00:41:42.534 [2024-10-01 22:40:37.562780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.534 [2024-10-01 22:40:37.562790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.534 qpair failed and we were unable to recover it. 00:41:42.534 [2024-10-01 22:40:37.563075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.534 [2024-10-01 22:40:37.563085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.534 qpair failed and we were unable to recover it. 00:41:42.534 [2024-10-01 22:40:37.563365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.534 [2024-10-01 22:40:37.563374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.534 qpair failed and we were unable to recover it. 00:41:42.534 [2024-10-01 22:40:37.563691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.534 [2024-10-01 22:40:37.563702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.534 qpair failed and we were unable to recover it. 00:41:42.534 [2024-10-01 22:40:37.564033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.534 [2024-10-01 22:40:37.564042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.534 qpair failed and we were unable to recover it. 00:41:42.534 [2024-10-01 22:40:37.564215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.534 [2024-10-01 22:40:37.564224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.534 qpair failed and we were unable to recover it. 00:41:42.534 [2024-10-01 22:40:37.564496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.534 [2024-10-01 22:40:37.564505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.534 qpair failed and we were unable to recover it. 00:41:42.534 [2024-10-01 22:40:37.564824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.534 [2024-10-01 22:40:37.564835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.534 qpair failed and we were unable to recover it. 00:41:42.534 [2024-10-01 22:40:37.565128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.534 [2024-10-01 22:40:37.565138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.534 qpair failed and we were unable to recover it. 00:41:42.534 [2024-10-01 22:40:37.565467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.534 [2024-10-01 22:40:37.565478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.534 qpair failed and we were unable to recover it. 
00:41:42.534 [2024-10-01 22:40:37.565687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.534 [2024-10-01 22:40:37.565697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.534 qpair failed and we were unable to recover it. 00:41:42.534 [2024-10-01 22:40:37.566036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.534 [2024-10-01 22:40:37.566045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.534 qpair failed and we were unable to recover it. 00:41:42.534 [2024-10-01 22:40:37.566331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.534 [2024-10-01 22:40:37.566342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.534 qpair failed and we were unable to recover it. 00:41:42.534 [2024-10-01 22:40:37.566541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.534 [2024-10-01 22:40:37.566552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.534 qpair failed and we were unable to recover it. 00:41:42.534 [2024-10-01 22:40:37.566865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.534 [2024-10-01 22:40:37.566875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.534 qpair failed and we were unable to recover it. 00:41:42.534 [2024-10-01 22:40:37.567136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.534 [2024-10-01 22:40:37.567145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.534 qpair failed and we were unable to recover it. 00:41:42.534 [2024-10-01 22:40:37.567333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.534 [2024-10-01 22:40:37.567343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.534 qpair failed and we were unable to recover it. 00:41:42.535 [2024-10-01 22:40:37.567664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.535 [2024-10-01 22:40:37.567673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.535 qpair failed and we were unable to recover it. 00:41:42.535 [2024-10-01 22:40:37.567848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.535 [2024-10-01 22:40:37.567857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.535 qpair failed and we were unable to recover it. 00:41:42.535 [2024-10-01 22:40:37.568139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.535 [2024-10-01 22:40:37.568150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.535 qpair failed and we were unable to recover it. 
00:41:42.535 [2024-10-01 22:40:37.568450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.535 [2024-10-01 22:40:37.568460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.535 qpair failed and we were unable to recover it. 00:41:42.535 [2024-10-01 22:40:37.568668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.535 [2024-10-01 22:40:37.568678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.535 qpair failed and we were unable to recover it. 00:41:42.535 [2024-10-01 22:40:37.568986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.535 [2024-10-01 22:40:37.568997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.535 qpair failed and we were unable to recover it. 00:41:42.535 [2024-10-01 22:40:37.569329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.535 [2024-10-01 22:40:37.569339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.535 qpair failed and we were unable to recover it. 00:41:42.535 [2024-10-01 22:40:37.569635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.535 [2024-10-01 22:40:37.569646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.535 qpair failed and we were unable to recover it. 00:41:42.535 [2024-10-01 22:40:37.569819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.535 [2024-10-01 22:40:37.569829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.535 qpair failed and we were unable to recover it. 00:41:42.535 [2024-10-01 22:40:37.570004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.535 [2024-10-01 22:40:37.570013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.535 qpair failed and we were unable to recover it. 00:41:42.535 [2024-10-01 22:40:37.570330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.535 [2024-10-01 22:40:37.570339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.535 qpair failed and we were unable to recover it. 00:41:42.535 [2024-10-01 22:40:37.570509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.535 [2024-10-01 22:40:37.570518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.535 qpair failed and we were unable to recover it. 00:41:42.535 [2024-10-01 22:40:37.570827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.535 [2024-10-01 22:40:37.570836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.535 qpair failed and we were unable to recover it. 
00:41:42.535 [2024-10-01 22:40:37.571169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.535 [2024-10-01 22:40:37.571179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.535 qpair failed and we were unable to recover it. 00:41:42.535 [2024-10-01 22:40:37.571493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.535 [2024-10-01 22:40:37.571503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.535 qpair failed and we were unable to recover it. 00:41:42.535 [2024-10-01 22:40:37.571751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.535 [2024-10-01 22:40:37.571761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.535 qpair failed and we were unable to recover it. 00:41:42.535 [2024-10-01 22:40:37.572097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.535 [2024-10-01 22:40:37.572107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.535 qpair failed and we were unable to recover it. 00:41:42.535 [2024-10-01 22:40:37.572322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.535 [2024-10-01 22:40:37.572331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.535 qpair failed and we were unable to recover it. 00:41:42.535 [2024-10-01 22:40:37.572682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.535 [2024-10-01 22:40:37.572692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.535 qpair failed and we were unable to recover it. 00:41:42.535 [2024-10-01 22:40:37.572986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.535 [2024-10-01 22:40:37.572995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.535 qpair failed and we were unable to recover it. 00:41:42.535 [2024-10-01 22:40:37.573315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.535 [2024-10-01 22:40:37.573326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.535 qpair failed and we were unable to recover it. 00:41:42.535 [2024-10-01 22:40:37.573487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.535 [2024-10-01 22:40:37.573498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.535 qpair failed and we were unable to recover it. 00:41:42.535 [2024-10-01 22:40:37.573816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.535 [2024-10-01 22:40:37.573826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.535 qpair failed and we were unable to recover it. 
00:41:42.535 [2024-10-01 22:40:37.573987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.535 [2024-10-01 22:40:37.573996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.535 qpair failed and we were unable to recover it.
[The same three-message pattern (posix_sock_create connect() failure with errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x130a180 on 10.0.0.2:4420, "qpair failed and we were unable to recover it.") repeats with only the timestamps changing for every subsequent retry from 22:40:37.574386 through 22:40:37.624083; those duplicate records are elided here.]
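For context on what the retry spam above is reporting: errno = 111 is ECONNREFUSED on Linux, meaning the TCP connection attempt to 10.0.0.2:4420 (the NVMe/TCP default port) was actively refused, which typically means nothing was accepting connections on the target side at that moment. A minimal standalone sketch of the failing call, assuming Linux and reusing the address and port from the log; this is illustrative only, not SPDK's actual posix_sock_create:

    /* Reproduce a connect() that fails with ECONNREFUSED (111), as in the
     * posix_sock_create errors above. Address/port are taken from the log;
     * everything else is hypothetical. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in sa;
        memset(&sa, 0, sizeof(sa));
        sa.sin_family = AF_INET;
        sa.sin_port = htons(4420);                /* NVMe/TCP default port */
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
            /* With no listener on 10.0.0.2:4420, errno is ECONNREFUSED,
             * which is 111 on Linux: "connect() failed, errno = 111". */
            fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                    errno, strerror(errno));
        }

        close(fd);
        return 0;
    }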
00:41:42.540 [2024-10-01 22:40:37.624132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:42.540 [2024-10-01 22:40:37.624142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420
00:41:42.540 qpair failed and we were unable to recover it.
00:41:42.540 [2024-10-01 22:40:37.624185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:42.540 [2024-10-01 22:40:37.624194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420
00:41:42.540 qpair failed and we were unable to recover it.
00:41:42.540 Read completed with error (sct=0, sc=8)
00:41:42.541 starting I/O failed
00:41:42.541 Read completed with error (sct=0, sc=8)
00:41:42.541 starting I/O failed
00:41:42.541 Read completed with error (sct=0, sc=8)
00:41:42.541 starting I/O failed
00:41:42.541 Read completed with error (sct=0, sc=8)
00:41:42.541 starting I/O failed
00:41:42.541 Read completed with error (sct=0, sc=8)
00:41:42.541 starting I/O failed
00:41:42.541 Read completed with error (sct=0, sc=8)
00:41:42.541 starting I/O failed
00:41:42.541 Read completed with error (sct=0, sc=8)
00:41:42.541 starting I/O failed
00:41:42.541 Read completed with error (sct=0, sc=8)
00:41:42.541 starting I/O failed
00:41:42.541 Read completed with error (sct=0, sc=8)
00:41:42.541 starting I/O failed
00:41:42.541 Read completed with error (sct=0, sc=8)
00:41:42.541 starting I/O failed
00:41:42.541 Write completed with error (sct=0, sc=8)
00:41:42.541 starting I/O failed
00:41:42.541 Write completed with error (sct=0, sc=8)
00:41:42.541 starting I/O failed
00:41:42.541 Read completed with error (sct=0, sc=8)
00:41:42.541 starting I/O failed
00:41:42.541 Write completed with error (sct=0, sc=8)
00:41:42.541 starting I/O failed
00:41:42.541 Write completed with error (sct=0, sc=8)
00:41:42.541 starting I/O failed
00:41:42.541 Read completed with error (sct=0, sc=8)
00:41:42.541 starting I/O failed
00:41:42.541 Write completed with error (sct=0, sc=8)
00:41:42.541 starting I/O failed
00:41:42.541 Read completed with error (sct=0, sc=8)
00:41:42.541 starting I/O failed
00:41:42.541 Write completed with error (sct=0, sc=8)
00:41:42.541 starting I/O failed
00:41:42.541 Read completed with error (sct=0, sc=8)
00:41:42.541 starting I/O failed
00:41:42.541 Write completed with error (sct=0, sc=8)
00:41:42.541 starting I/O failed
00:41:42.541 Write completed with error (sct=0, sc=8)
00:41:42.541 starting I/O failed
00:41:42.541 Write completed with error (sct=0, sc=8)
00:41:42.541 starting I/O failed
00:41:42.541 Write completed with error (sct=0, sc=8)
00:41:42.541 starting I/O failed
00:41:42.541 Read completed with error (sct=0, sc=8)
00:41:42.541 starting I/O failed
00:41:42.541 Write completed with error (sct=0, sc=8)
00:41:42.541 starting I/O failed
00:41:42.541 Read completed with error (sct=0, sc=8)
00:41:42.541 starting I/O failed
00:41:42.541 Write completed with error (sct=0, sc=8)
00:41:42.541 starting I/O failed
00:41:42.541 Write completed with error (sct=0, sc=8)
00:41:42.541 starting I/O failed
00:41:42.541 Read completed with error (sct=0, sc=8)
00:41:42.541 starting I/O failed
00:41:42.541 Read completed with error (sct=0, sc=8)
00:41:42.541 starting I/O failed
00:41:42.541 Read completed with error (sct=0, sc=8)
00:41:42.541 starting I/O failed
00:41:42.541 [2024-10-01 22:40:37.624406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:42.541 [2024-10-01 22:40:37.624585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:42.541 [2024-10-01 22:40:37.624600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420
00:41:42.541 qpair failed and we were unable to recover it.
00:41:42.541 [2024-10-01 22:40:37.624875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:42.541 [2024-10-01 22:40:37.624903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420
00:41:42.541 qpair failed and we were unable to recover it.
00:41:42.541 [2024-10-01 22:40:37.625073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:42.541 [2024-10-01 22:40:37.625082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420
00:41:42.541 qpair failed and we were unable to recover it.
00:41:42.541 [2024-10-01 22:40:37.625255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:42.541 [2024-10-01 22:40:37.625263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420
00:41:42.541 qpair failed and we were unable to recover it.
00:41:42.541 [2024-10-01 22:40:37.625449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:42.541 [2024-10-01 22:40:37.625456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420
00:41:42.541 qpair failed and we were unable to recover it.
00:41:42.541 [2024-10-01 22:40:37.625621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:42.541 [2024-10-01 22:40:37.625633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420
00:41:42.541 qpair failed and we were unable to recover it.
00:41:42.541 [2024-10-01 22:40:37.625825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:42.541 [2024-10-01 22:40:37.625832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420
00:41:42.541 qpair failed and we were unable to recover it.
00:41:42.541 [2024-10-01 22:40:37.626111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:42.541 [2024-10-01 22:40:37.626118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420
00:41:42.541 qpair failed and we were unable to recover it.
00:41:42.541 [2024-10-01 22:40:37.626430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:42.541 [2024-10-01 22:40:37.626437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420
00:41:42.541 qpair failed and we were unable to recover it.
00:41:42.541 [2024-10-01 22:40:37.626767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:42.541 [2024-10-01 22:40:37.626774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420
00:41:42.541 qpair failed and we were unable to recover it.
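The burst above is the other half of the same failure: with the connection to the target gone, the 32 outstanding reads and writes complete with sct=0, sc=8 (status code type 0 is the NVMe generic set, in which 0x08 is Command Aborted due to SQ Deletion), and the next poll of the queue pair returns -6 (-ENXIO), which SPDK reports as the CQ transport error on qpair id 2. A minimal sketch of that poll loop follows, assuming SPDK's public NVMe driver API (spdk/nvme.h) and a previously connected qpair; controller setup and I/O submission are omitted.

/* Sketch of the two error paths visible in the log: aborted commands
 * complete through the registered callback with an error status, and the
 * poll call itself returns a negative errno once the transport is dead. */
#include <stdio.h>
#include "spdk/nvme.h"
#include "spdk/string.h"

/* Completion callback, as would be registered when submitting I/O
 * (e.g. via spdk_nvme_ns_cmd_read(); submission not shown here). */
static void
io_done(void *ctx, const struct spdk_nvme_cpl *cpl)
{
    (void)ctx;
    if (spdk_nvme_cpl_is_error(cpl)) {
        /* Matches the log's "completed with error (sct=0, sc=8)". */
        fprintf(stderr, "I/O completed with error (sct=%d, sc=%d)\n",
                cpl->status.sct, cpl->status.sc);
    }
}

static void
poll_until_transport_error(struct spdk_nvme_qpair *qpair)
{
    int32_t rc;

    for (;;) {
        /* A max_completions of 0 means "process all available". */
        rc = spdk_nvme_qpair_process_completions(qpair, 0);
        if (rc < 0) {
            /* -6 is -ENXIO, reported above as
             * "CQ transport error -6 (No such device or address)". */
            fprintf(stderr, "CQ transport error %d (%s)\n",
                    rc, spdk_strerror(-rc));
            break;
        }
    }
}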
00:41:42.541 [2024-10-01 22:40:37.627126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:42.541 [2024-10-01 22:40:37.627138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420
00:41:42.541 qpair failed and we were unable to recover it.
00:41:42.546 [2024-10-01 22:40:37.676047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:42.546 [2024-10-01 22:40:37.676056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420
00:41:42.546 qpair failed and we were unable to recover it.
00:41:42.546 [2024-10-01 22:40:37.676400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.546 [2024-10-01 22:40:37.676410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.546 qpair failed and we were unable to recover it. 00:41:42.546 [2024-10-01 22:40:37.676695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.546 [2024-10-01 22:40:37.676705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.546 qpair failed and we were unable to recover it. 00:41:42.546 [2024-10-01 22:40:37.676999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.546 [2024-10-01 22:40:37.677008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.546 qpair failed and we were unable to recover it. 00:41:42.546 [2024-10-01 22:40:37.677172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.546 [2024-10-01 22:40:37.677182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.546 qpair failed and we were unable to recover it. 00:41:42.546 [2024-10-01 22:40:37.677436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.546 [2024-10-01 22:40:37.677446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.546 qpair failed and we were unable to recover it. 00:41:42.546 [2024-10-01 22:40:37.677772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.546 [2024-10-01 22:40:37.677782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.546 qpair failed and we were unable to recover it. 00:41:42.546 [2024-10-01 22:40:37.677953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.546 [2024-10-01 22:40:37.677964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.546 qpair failed and we were unable to recover it. 00:41:42.546 [2024-10-01 22:40:37.678356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.546 [2024-10-01 22:40:37.678365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.546 qpair failed and we were unable to recover it. 00:41:42.546 [2024-10-01 22:40:37.678526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.546 [2024-10-01 22:40:37.678536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.546 qpair failed and we were unable to recover it. 00:41:42.546 [2024-10-01 22:40:37.678952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.546 [2024-10-01 22:40:37.678962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.546 qpair failed and we were unable to recover it. 
00:41:42.546 [2024-10-01 22:40:37.679279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.546 [2024-10-01 22:40:37.679289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.546 qpair failed and we were unable to recover it. 00:41:42.546 [2024-10-01 22:40:37.679629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.546 [2024-10-01 22:40:37.679640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.546 qpair failed and we were unable to recover it. 00:41:42.546 [2024-10-01 22:40:37.679987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.546 [2024-10-01 22:40:37.679997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.546 qpair failed and we were unable to recover it. 00:41:42.546 [2024-10-01 22:40:37.680200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.546 [2024-10-01 22:40:37.680209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.546 qpair failed and we were unable to recover it. 00:41:42.546 [2024-10-01 22:40:37.680498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.546 [2024-10-01 22:40:37.680508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.546 qpair failed and we were unable to recover it. 00:41:42.546 [2024-10-01 22:40:37.680848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.546 [2024-10-01 22:40:37.680859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.546 qpair failed and we were unable to recover it. 00:41:42.546 [2024-10-01 22:40:37.681153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.546 [2024-10-01 22:40:37.681163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.546 qpair failed and we were unable to recover it. 00:41:42.546 [2024-10-01 22:40:37.681476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.546 [2024-10-01 22:40:37.681486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.546 qpair failed and we were unable to recover it. 00:41:42.546 [2024-10-01 22:40:37.681772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.546 [2024-10-01 22:40:37.681783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.546 qpair failed and we were unable to recover it. 00:41:42.546 [2024-10-01 22:40:37.682151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.546 [2024-10-01 22:40:37.682160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.546 qpair failed and we were unable to recover it. 
00:41:42.546 [2024-10-01 22:40:37.682346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.546 [2024-10-01 22:40:37.682356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.546 qpair failed and we were unable to recover it. 00:41:42.546 [2024-10-01 22:40:37.682699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.546 [2024-10-01 22:40:37.682709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.546 qpair failed and we were unable to recover it. 00:41:42.546 [2024-10-01 22:40:37.683013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.547 [2024-10-01 22:40:37.683030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.547 qpair failed and we were unable to recover it. 00:41:42.547 [2024-10-01 22:40:37.683342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.547 [2024-10-01 22:40:37.683352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.547 qpair failed and we were unable to recover it. 00:41:42.547 [2024-10-01 22:40:37.683541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.547 [2024-10-01 22:40:37.683551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.547 qpair failed and we were unable to recover it. 00:41:42.547 [2024-10-01 22:40:37.683728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.547 [2024-10-01 22:40:37.683739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.547 qpair failed and we were unable to recover it. 00:41:42.547 [2024-10-01 22:40:37.684049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.547 [2024-10-01 22:40:37.684059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.547 qpair failed and we were unable to recover it. 00:41:42.547 [2024-10-01 22:40:37.684107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.547 [2024-10-01 22:40:37.684116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.547 qpair failed and we were unable to recover it. 00:41:42.547 [2024-10-01 22:40:37.684321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.547 [2024-10-01 22:40:37.684330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.547 qpair failed and we were unable to recover it. 00:41:42.547 [2024-10-01 22:40:37.684667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.547 [2024-10-01 22:40:37.684678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.547 qpair failed and we were unable to recover it. 
00:41:42.547 [2024-10-01 22:40:37.684728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.547 [2024-10-01 22:40:37.684738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.547 qpair failed and we were unable to recover it. 00:41:42.547 [2024-10-01 22:40:37.684934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.547 [2024-10-01 22:40:37.684945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.547 qpair failed and we were unable to recover it. 00:41:42.547 [2024-10-01 22:40:37.685140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.547 [2024-10-01 22:40:37.685150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.547 qpair failed and we were unable to recover it. 00:41:42.547 [2024-10-01 22:40:37.685344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.547 [2024-10-01 22:40:37.685354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.547 qpair failed and we were unable to recover it. 00:41:42.547 [2024-10-01 22:40:37.685640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.547 [2024-10-01 22:40:37.685651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.547 qpair failed and we were unable to recover it. 00:41:42.547 [2024-10-01 22:40:37.686021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.547 [2024-10-01 22:40:37.686032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.547 qpair failed and we were unable to recover it. 00:41:42.547 [2024-10-01 22:40:37.686347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.547 [2024-10-01 22:40:37.686356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.547 qpair failed and we were unable to recover it. 00:41:42.547 [2024-10-01 22:40:37.686669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.547 [2024-10-01 22:40:37.686679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.547 qpair failed and we were unable to recover it. 00:41:42.547 [2024-10-01 22:40:37.686849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.547 [2024-10-01 22:40:37.686858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.547 qpair failed and we were unable to recover it. 00:41:42.547 [2024-10-01 22:40:37.687059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.547 [2024-10-01 22:40:37.687069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.547 qpair failed and we were unable to recover it. 
00:41:42.547 [2024-10-01 22:40:37.687235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.547 [2024-10-01 22:40:37.687245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.547 qpair failed and we were unable to recover it. 00:41:42.547 [2024-10-01 22:40:37.687545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.547 [2024-10-01 22:40:37.687555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.547 qpair failed and we were unable to recover it. 00:41:42.547 [2024-10-01 22:40:37.687888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.547 [2024-10-01 22:40:37.687898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.547 qpair failed and we were unable to recover it. 00:41:42.547 [2024-10-01 22:40:37.688083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.547 [2024-10-01 22:40:37.688093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.547 qpair failed and we were unable to recover it. 00:41:42.547 [2024-10-01 22:40:37.688285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.547 [2024-10-01 22:40:37.688295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.547 qpair failed and we were unable to recover it. 00:41:42.547 [2024-10-01 22:40:37.688632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.547 [2024-10-01 22:40:37.688642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.547 qpair failed and we were unable to recover it. 00:41:42.547 [2024-10-01 22:40:37.689002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.547 [2024-10-01 22:40:37.689011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.547 qpair failed and we were unable to recover it. 00:41:42.547 [2024-10-01 22:40:37.689327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.547 [2024-10-01 22:40:37.689337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.547 qpair failed and we were unable to recover it. 00:41:42.547 [2024-10-01 22:40:37.689620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.547 [2024-10-01 22:40:37.689634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.547 qpair failed and we were unable to recover it. 00:41:42.547 [2024-10-01 22:40:37.689915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.547 [2024-10-01 22:40:37.689925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.547 qpair failed and we were unable to recover it. 
00:41:42.547 [2024-10-01 22:40:37.690302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.547 [2024-10-01 22:40:37.690312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.547 qpair failed and we were unable to recover it. 00:41:42.547 [2024-10-01 22:40:37.690645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.547 [2024-10-01 22:40:37.690655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.547 qpair failed and we were unable to recover it. 00:41:42.547 [2024-10-01 22:40:37.690964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.547 [2024-10-01 22:40:37.690974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.547 qpair failed and we were unable to recover it. 00:41:42.547 [2024-10-01 22:40:37.691200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.547 [2024-10-01 22:40:37.691210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.547 qpair failed and we were unable to recover it. 00:41:42.547 [2024-10-01 22:40:37.691502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.547 [2024-10-01 22:40:37.691512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.547 qpair failed and we were unable to recover it. 00:41:42.547 [2024-10-01 22:40:37.691808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.548 [2024-10-01 22:40:37.691819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.548 qpair failed and we were unable to recover it. 00:41:42.548 [2024-10-01 22:40:37.692005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.548 [2024-10-01 22:40:37.692015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.548 qpair failed and we were unable to recover it. 00:41:42.548 [2024-10-01 22:40:37.692366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.548 [2024-10-01 22:40:37.692376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.548 qpair failed and we were unable to recover it. 00:41:42.548 [2024-10-01 22:40:37.692475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.548 [2024-10-01 22:40:37.692485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.548 qpair failed and we were unable to recover it. 00:41:42.548 [2024-10-01 22:40:37.692655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.548 [2024-10-01 22:40:37.692736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7f8000b90 with addr=10.0.0.2, port=4420 00:41:42.548 qpair failed and we were unable to recover it. 
00:41:42.548 [2024-10-01 22:40:37.692982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.548 [2024-10-01 22:40:37.693015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7f8000b90 with addr=10.0.0.2, port=4420 00:41:42.548 qpair failed and we were unable to recover it. 00:41:42.548 [2024-10-01 22:40:37.693359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.548 [2024-10-01 22:40:37.693390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7f8000b90 with addr=10.0.0.2, port=4420 00:41:42.548 qpair failed and we were unable to recover it. 00:41:42.548 [2024-10-01 22:40:37.693718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.548 [2024-10-01 22:40:37.693729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.548 qpair failed and we were unable to recover it. 00:41:42.548 [2024-10-01 22:40:37.694069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.548 [2024-10-01 22:40:37.694081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.548 qpair failed and we were unable to recover it. 00:41:42.548 [2024-10-01 22:40:37.694252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.548 [2024-10-01 22:40:37.694265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.548 qpair failed and we were unable to recover it. 00:41:42.548 [2024-10-01 22:40:37.694478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.548 [2024-10-01 22:40:37.694489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.548 qpair failed and we were unable to recover it. 00:41:42.548 [2024-10-01 22:40:37.694829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.548 [2024-10-01 22:40:37.694841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.548 qpair failed and we were unable to recover it. 00:41:42.548 [2024-10-01 22:40:37.695135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.548 [2024-10-01 22:40:37.695148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.548 qpair failed and we were unable to recover it. 00:41:42.548 [2024-10-01 22:40:37.695309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.548 [2024-10-01 22:40:37.695320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.548 qpair failed and we were unable to recover it. 00:41:42.548 [2024-10-01 22:40:37.695618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.548 [2024-10-01 22:40:37.695633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.548 qpair failed and we were unable to recover it. 
00:41:42.548 [2024-10-01 22:40:37.695984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.548 [2024-10-01 22:40:37.695995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.548 qpair failed and we were unable to recover it. 00:41:42.548 [2024-10-01 22:40:37.696199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.548 [2024-10-01 22:40:37.696209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.548 qpair failed and we were unable to recover it. 00:41:42.548 [2024-10-01 22:40:37.696423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.548 [2024-10-01 22:40:37.696433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.548 qpair failed and we were unable to recover it. 00:41:42.548 [2024-10-01 22:40:37.696608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.548 [2024-10-01 22:40:37.696618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.548 qpair failed and we were unable to recover it. 00:41:42.548 [2024-10-01 22:40:37.696968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.548 [2024-10-01 22:40:37.696978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.548 qpair failed and we were unable to recover it. 00:41:42.548 [2024-10-01 22:40:37.697256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.548 [2024-10-01 22:40:37.697266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.548 qpair failed and we were unable to recover it. 00:41:42.548 [2024-10-01 22:40:37.697446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.548 [2024-10-01 22:40:37.697456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.548 qpair failed and we were unable to recover it. 00:41:42.548 [2024-10-01 22:40:37.697755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.548 [2024-10-01 22:40:37.697765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.548 qpair failed and we were unable to recover it. 00:41:42.548 [2024-10-01 22:40:37.698101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.548 [2024-10-01 22:40:37.698111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.548 qpair failed and we were unable to recover it. 00:41:42.548 [2024-10-01 22:40:37.698424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.548 [2024-10-01 22:40:37.698433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.548 qpair failed and we were unable to recover it. 
00:41:42.548 [2024-10-01 22:40:37.698596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.548 [2024-10-01 22:40:37.698605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.548 qpair failed and we were unable to recover it. 00:41:42.548 [2024-10-01 22:40:37.699012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.548 [2024-10-01 22:40:37.699023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.548 qpair failed and we were unable to recover it. 00:41:42.548 [2024-10-01 22:40:37.699248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.548 [2024-10-01 22:40:37.699258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.548 qpair failed and we were unable to recover it. 00:41:42.548 [2024-10-01 22:40:37.699423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.548 [2024-10-01 22:40:37.699434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.548 qpair failed and we were unable to recover it. 00:41:42.548 [2024-10-01 22:40:37.699492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.548 [2024-10-01 22:40:37.699502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.548 qpair failed and we were unable to recover it. 00:41:42.548 [2024-10-01 22:40:37.699689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.548 [2024-10-01 22:40:37.699700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.548 qpair failed and we were unable to recover it. 00:41:42.548 [2024-10-01 22:40:37.699878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.548 [2024-10-01 22:40:37.699887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.548 qpair failed and we were unable to recover it. 00:41:42.548 [2024-10-01 22:40:37.700199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.548 [2024-10-01 22:40:37.700209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.548 qpair failed and we were unable to recover it. 00:41:42.548 [2024-10-01 22:40:37.700493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.548 [2024-10-01 22:40:37.700503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.548 qpair failed and we were unable to recover it. 00:41:42.548 [2024-10-01 22:40:37.700793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.548 [2024-10-01 22:40:37.700803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.548 qpair failed and we were unable to recover it. 
00:41:42.548 [2024-10-01 22:40:37.701140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.548 [2024-10-01 22:40:37.701149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.548 qpair failed and we were unable to recover it. 00:41:42.548 [2024-10-01 22:40:37.701427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.548 [2024-10-01 22:40:37.701438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.548 qpair failed and we were unable to recover it. 00:41:42.548 [2024-10-01 22:40:37.701765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.548 [2024-10-01 22:40:37.701776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.548 qpair failed and we were unable to recover it. 00:41:42.548 [2024-10-01 22:40:37.702060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.548 [2024-10-01 22:40:37.702070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.548 qpair failed and we were unable to recover it. 00:41:42.549 [2024-10-01 22:40:37.702371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.549 [2024-10-01 22:40:37.702381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.549 qpair failed and we were unable to recover it. 00:41:42.549 [2024-10-01 22:40:37.702542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.549 [2024-10-01 22:40:37.702553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.549 qpair failed and we were unable to recover it. 00:41:42.549 [2024-10-01 22:40:37.702951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.549 [2024-10-01 22:40:37.702961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.549 qpair failed and we were unable to recover it. 00:41:42.549 [2024-10-01 22:40:37.703253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.549 [2024-10-01 22:40:37.703263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.549 qpair failed and we were unable to recover it. 00:41:42.549 [2024-10-01 22:40:37.703576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.549 [2024-10-01 22:40:37.703586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.549 qpair failed and we were unable to recover it. 00:41:42.549 [2024-10-01 22:40:37.703862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.549 [2024-10-01 22:40:37.703873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.549 qpair failed and we were unable to recover it. 
00:41:42.549 [2024-10-01 22:40:37.704163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.549 [2024-10-01 22:40:37.704173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.549 qpair failed and we were unable to recover it. 00:41:42.549 [2024-10-01 22:40:37.704439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.549 [2024-10-01 22:40:37.704449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.549 qpair failed and we were unable to recover it. 00:41:42.549 [2024-10-01 22:40:37.704716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.549 [2024-10-01 22:40:37.704727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.549 qpair failed and we were unable to recover it. 00:41:42.549 [2024-10-01 22:40:37.705030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.549 [2024-10-01 22:40:37.705040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.549 qpair failed and we were unable to recover it. 00:41:42.549 [2024-10-01 22:40:37.705345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.549 [2024-10-01 22:40:37.705355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.549 qpair failed and we were unable to recover it. 00:41:42.549 [2024-10-01 22:40:37.705670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.549 [2024-10-01 22:40:37.705680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.549 qpair failed and we were unable to recover it. 00:41:42.549 [2024-10-01 22:40:37.705897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.549 [2024-10-01 22:40:37.705907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.549 qpair failed and we were unable to recover it. 00:41:42.549 [2024-10-01 22:40:37.706226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.549 [2024-10-01 22:40:37.706236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.549 qpair failed and we were unable to recover it. 00:41:42.549 [2024-10-01 22:40:37.706548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.549 [2024-10-01 22:40:37.706558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.549 qpair failed and we were unable to recover it. 00:41:42.549 [2024-10-01 22:40:37.706646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.549 [2024-10-01 22:40:37.706656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130a180 with addr=10.0.0.2, port=4420 00:41:42.549 qpair failed and we were unable to recover it. 
00:41:42.549 [2024-10-01 22:40:37.707068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:42.549 [2024-10-01 22:40:37.707097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420
00:41:42.549 qpair failed and we were unable to recover it.
[... the identical failure triplet then repeats for every reconnect attempt against tqpair=0x7fa7fc000b90 from 22:40:37.707398 through 22:40:37.724398 ...]
00:41:42.551 [2024-10-01 22:40:37.724566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.551 [2024-10-01 22:40:37.724573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.551 qpair failed and we were unable to recover it. 00:41:42.551 [2024-10-01 22:40:37.724616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.551 [2024-10-01 22:40:37.724627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.551 qpair failed and we were unable to recover it. 00:41:42.551 [2024-10-01 22:40:37.724807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.551 [2024-10-01 22:40:37.724815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.551 qpair failed and we were unable to recover it. 00:41:42.551 [2024-10-01 22:40:37.724965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.551 [2024-10-01 22:40:37.724972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.551 qpair failed and we were unable to recover it. 00:41:42.551 [2024-10-01 22:40:37.725159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.551 [2024-10-01 22:40:37.725166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.551 qpair failed and we were unable to recover it. 00:41:42.551 [2024-10-01 22:40:37.725547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.551 [2024-10-01 22:40:37.725555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.551 qpair failed and we were unable to recover it. 00:41:42.551 [2024-10-01 22:40:37.725893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.551 [2024-10-01 22:40:37.725901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.551 qpair failed and we were unable to recover it. 00:41:42.551 [2024-10-01 22:40:37.726203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.551 [2024-10-01 22:40:37.726210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.551 qpair failed and we were unable to recover it. 00:41:42.551 [2024-10-01 22:40:37.726507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.551 [2024-10-01 22:40:37.726515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.551 qpair failed and we were unable to recover it. 00:41:42.551 [2024-10-01 22:40:37.726688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.551 [2024-10-01 22:40:37.726696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.551 qpair failed and we were unable to recover it. 
00:41:42.551 [2024-10-01 22:40:37.726943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.551 [2024-10-01 22:40:37.726950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.551 qpair failed and we were unable to recover it. 00:41:42.551 [2024-10-01 22:40:37.727229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.551 [2024-10-01 22:40:37.727236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.551 qpair failed and we were unable to recover it. 00:41:42.551 [2024-10-01 22:40:37.727547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.551 [2024-10-01 22:40:37.727554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.551 qpair failed and we were unable to recover it. 00:41:42.551 [2024-10-01 22:40:37.727817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.551 [2024-10-01 22:40:37.727824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.551 qpair failed and we were unable to recover it. 00:41:42.551 [2024-10-01 22:40:37.728112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.551 [2024-10-01 22:40:37.728119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.551 qpair failed and we were unable to recover it. 00:41:42.551 [2024-10-01 22:40:37.728289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.551 [2024-10-01 22:40:37.728296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.551 qpair failed and we were unable to recover it. 00:41:42.551 [2024-10-01 22:40:37.728609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.551 [2024-10-01 22:40:37.728616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.551 qpair failed and we were unable to recover it. 00:41:42.551 [2024-10-01 22:40:37.728771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.551 [2024-10-01 22:40:37.728778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.551 qpair failed and we were unable to recover it. 00:41:42.551 [2024-10-01 22:40:37.728932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.551 [2024-10-01 22:40:37.728940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.551 qpair failed and we were unable to recover it. 00:41:42.551 [2024-10-01 22:40:37.729262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.551 [2024-10-01 22:40:37.729269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.551 qpair failed and we were unable to recover it. 
00:41:42.551 [2024-10-01 22:40:37.729529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.551 [2024-10-01 22:40:37.729536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.551 qpair failed and we were unable to recover it. 00:41:42.551 [2024-10-01 22:40:37.729977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.551 [2024-10-01 22:40:37.729985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.551 qpair failed and we were unable to recover it. 00:41:42.551 [2024-10-01 22:40:37.730330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.551 [2024-10-01 22:40:37.730337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.551 qpair failed and we were unable to recover it. 00:41:42.551 [2024-10-01 22:40:37.730615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.551 [2024-10-01 22:40:37.730622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.551 qpair failed and we were unable to recover it. 00:41:42.551 [2024-10-01 22:40:37.730933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.551 [2024-10-01 22:40:37.730941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.551 qpair failed and we were unable to recover it. 00:41:42.551 [2024-10-01 22:40:37.731146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.551 [2024-10-01 22:40:37.731154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.551 qpair failed and we were unable to recover it. 00:41:42.551 [2024-10-01 22:40:37.731325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.551 [2024-10-01 22:40:37.731332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.551 qpair failed and we were unable to recover it. 00:41:42.551 [2024-10-01 22:40:37.731603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.551 [2024-10-01 22:40:37.731611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.551 qpair failed and we were unable to recover it. 00:41:42.551 [2024-10-01 22:40:37.731910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.551 [2024-10-01 22:40:37.731919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.551 qpair failed and we were unable to recover it. 00:41:42.551 [2024-10-01 22:40:37.732184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.551 [2024-10-01 22:40:37.732191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.551 qpair failed and we were unable to recover it. 
00:41:42.551 [2024-10-01 22:40:37.732533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.551 [2024-10-01 22:40:37.732540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.551 qpair failed and we were unable to recover it. 00:41:42.551 [2024-10-01 22:40:37.732837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.551 [2024-10-01 22:40:37.732845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.551 qpair failed and we were unable to recover it. 00:41:42.551 [2024-10-01 22:40:37.733242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.551 [2024-10-01 22:40:37.733249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.551 qpair failed and we were unable to recover it. 00:41:42.551 [2024-10-01 22:40:37.733536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.551 [2024-10-01 22:40:37.733543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.551 qpair failed and we were unable to recover it. 00:41:42.551 [2024-10-01 22:40:37.733864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.551 [2024-10-01 22:40:37.733872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.551 qpair failed and we were unable to recover it. 00:41:42.551 [2024-10-01 22:40:37.734171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.551 [2024-10-01 22:40:37.734178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.551 qpair failed and we were unable to recover it. 00:41:42.551 [2024-10-01 22:40:37.734470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.552 [2024-10-01 22:40:37.734478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.552 qpair failed and we were unable to recover it. 00:41:42.552 [2024-10-01 22:40:37.734733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.552 [2024-10-01 22:40:37.734741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.552 qpair failed and we were unable to recover it. 00:41:42.552 [2024-10-01 22:40:37.735039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.552 [2024-10-01 22:40:37.735046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.552 qpair failed and we were unable to recover it. 00:41:42.552 [2024-10-01 22:40:37.735383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.552 [2024-10-01 22:40:37.735391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.552 qpair failed and we were unable to recover it. 
00:41:42.552 [2024-10-01 22:40:37.735669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.552 [2024-10-01 22:40:37.735676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.552 qpair failed and we were unable to recover it. 00:41:42.552 [2024-10-01 22:40:37.735996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.552 [2024-10-01 22:40:37.736003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.552 qpair failed and we were unable to recover it. 00:41:42.552 [2024-10-01 22:40:37.736290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.552 [2024-10-01 22:40:37.736297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.552 qpair failed and we were unable to recover it. 00:41:42.552 [2024-10-01 22:40:37.736623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.552 [2024-10-01 22:40:37.736635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.552 qpair failed and we were unable to recover it. 00:41:42.552 [2024-10-01 22:40:37.736954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.552 [2024-10-01 22:40:37.736962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.552 qpair failed and we were unable to recover it. 00:41:42.552 [2024-10-01 22:40:37.737127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.552 [2024-10-01 22:40:37.737135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.552 qpair failed and we were unable to recover it. 00:41:42.552 [2024-10-01 22:40:37.737445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.552 [2024-10-01 22:40:37.737452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.552 qpair failed and we were unable to recover it. 00:41:42.552 [2024-10-01 22:40:37.737751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.552 [2024-10-01 22:40:37.737759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.552 qpair failed and we were unable to recover it. 00:41:42.552 [2024-10-01 22:40:37.738075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.552 [2024-10-01 22:40:37.738082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.552 qpair failed and we were unable to recover it. 00:41:42.552 [2024-10-01 22:40:37.738404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.552 [2024-10-01 22:40:37.738412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.552 qpair failed and we were unable to recover it. 
00:41:42.552 [2024-10-01 22:40:37.738671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.552 [2024-10-01 22:40:37.738678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.552 qpair failed and we were unable to recover it. 00:41:42.552 [2024-10-01 22:40:37.738978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.552 [2024-10-01 22:40:37.738986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.552 qpair failed and we were unable to recover it. 00:41:42.552 [2024-10-01 22:40:37.739303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.552 [2024-10-01 22:40:37.739309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.552 qpair failed and we were unable to recover it. 00:41:42.552 [2024-10-01 22:40:37.739473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.552 [2024-10-01 22:40:37.739481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.552 qpair failed and we were unable to recover it. 00:41:42.552 [2024-10-01 22:40:37.739657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.552 [2024-10-01 22:40:37.739664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.552 qpair failed and we were unable to recover it. 00:41:42.552 [2024-10-01 22:40:37.739825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.552 [2024-10-01 22:40:37.739831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.552 qpair failed and we were unable to recover it. 00:41:42.552 [2024-10-01 22:40:37.740002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.552 [2024-10-01 22:40:37.740011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.552 qpair failed and we were unable to recover it. 00:41:42.552 [2024-10-01 22:40:37.740047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.552 [2024-10-01 22:40:37.740054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.552 qpair failed and we were unable to recover it. 00:41:42.552 [2024-10-01 22:40:37.740228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.552 [2024-10-01 22:40:37.740236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.552 qpair failed and we were unable to recover it. 00:41:42.552 [2024-10-01 22:40:37.740392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.552 [2024-10-01 22:40:37.740399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.552 qpair failed and we were unable to recover it. 
00:41:42.552 [2024-10-01 22:40:37.740681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.552 [2024-10-01 22:40:37.740689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.552 qpair failed and we were unable to recover it. 00:41:42.552 [2024-10-01 22:40:37.740884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.552 [2024-10-01 22:40:37.740891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.552 qpair failed and we were unable to recover it. 00:41:42.552 [2024-10-01 22:40:37.741203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.552 [2024-10-01 22:40:37.741211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.552 qpair failed and we were unable to recover it. 00:41:42.552 [2024-10-01 22:40:37.741523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.552 [2024-10-01 22:40:37.741532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.552 qpair failed and we were unable to recover it. 00:41:42.552 [2024-10-01 22:40:37.741849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.552 [2024-10-01 22:40:37.741857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.552 qpair failed and we were unable to recover it. 00:41:42.552 [2024-10-01 22:40:37.742165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.552 [2024-10-01 22:40:37.742180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.552 qpair failed and we were unable to recover it. 00:41:42.552 [2024-10-01 22:40:37.742448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.552 [2024-10-01 22:40:37.742456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.552 qpair failed and we were unable to recover it. 00:41:42.552 [2024-10-01 22:40:37.742775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.552 [2024-10-01 22:40:37.742783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.552 qpair failed and we were unable to recover it. 00:41:42.552 [2024-10-01 22:40:37.743094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.552 [2024-10-01 22:40:37.743103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.553 qpair failed and we were unable to recover it. 00:41:42.553 [2024-10-01 22:40:37.743417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.553 [2024-10-01 22:40:37.743423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.553 qpair failed and we were unable to recover it. 
00:41:42.553 [2024-10-01 22:40:37.743743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.553 [2024-10-01 22:40:37.743751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.553 qpair failed and we were unable to recover it. 00:41:42.553 [2024-10-01 22:40:37.744160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.553 [2024-10-01 22:40:37.744167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.553 qpair failed and we were unable to recover it. 00:41:42.553 [2024-10-01 22:40:37.744448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.553 [2024-10-01 22:40:37.744456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.553 qpair failed and we were unable to recover it. 00:41:42.553 [2024-10-01 22:40:37.744633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.553 [2024-10-01 22:40:37.744640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.553 qpair failed and we were unable to recover it. 00:41:42.553 [2024-10-01 22:40:37.744939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.553 [2024-10-01 22:40:37.744947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.553 qpair failed and we were unable to recover it. 00:41:42.553 [2024-10-01 22:40:37.745247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.553 [2024-10-01 22:40:37.745255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.553 qpair failed and we were unable to recover it. 00:41:42.553 [2024-10-01 22:40:37.745417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.553 [2024-10-01 22:40:37.745425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.553 qpair failed and we were unable to recover it. 00:41:42.553 [2024-10-01 22:40:37.745735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.553 [2024-10-01 22:40:37.745742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.553 qpair failed and we were unable to recover it. 00:41:42.553 [2024-10-01 22:40:37.746121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.553 [2024-10-01 22:40:37.746128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.553 qpair failed and we were unable to recover it. 00:41:42.553 [2024-10-01 22:40:37.746465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.553 [2024-10-01 22:40:37.746472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.553 qpair failed and we were unable to recover it. 
00:41:42.553 [2024-10-01 22:40:37.746784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.553 [2024-10-01 22:40:37.746791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.553 qpair failed and we were unable to recover it. 00:41:42.553 [2024-10-01 22:40:37.747098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.553 [2024-10-01 22:40:37.747106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.553 qpair failed and we were unable to recover it. 00:41:42.553 [2024-10-01 22:40:37.747417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.553 [2024-10-01 22:40:37.747424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.553 qpair failed and we were unable to recover it. 00:41:42.553 [2024-10-01 22:40:37.747741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.553 [2024-10-01 22:40:37.747748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.553 qpair failed and we were unable to recover it. 00:41:42.553 [2024-10-01 22:40:37.748109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.553 [2024-10-01 22:40:37.748116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.553 qpair failed and we were unable to recover it. 00:41:42.553 [2024-10-01 22:40:37.748418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.553 [2024-10-01 22:40:37.748426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.553 qpair failed and we were unable to recover it. 00:41:42.553 [2024-10-01 22:40:37.748751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.553 [2024-10-01 22:40:37.748758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.553 qpair failed and we were unable to recover it. 00:41:42.553 [2024-10-01 22:40:37.748939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.553 [2024-10-01 22:40:37.748946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.553 qpair failed and we were unable to recover it. 00:41:42.553 [2024-10-01 22:40:37.749266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.553 [2024-10-01 22:40:37.749273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.553 qpair failed and we were unable to recover it. 00:41:42.553 [2024-10-01 22:40:37.749421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.553 [2024-10-01 22:40:37.749428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.553 qpair failed and we were unable to recover it. 
00:41:42.553 [2024-10-01 22:40:37.749784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.553 [2024-10-01 22:40:37.749791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.553 qpair failed and we were unable to recover it. 00:41:42.553 [2024-10-01 22:40:37.750062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.553 [2024-10-01 22:40:37.750069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.553 qpair failed and we were unable to recover it. 00:41:42.553 [2024-10-01 22:40:37.750248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.553 [2024-10-01 22:40:37.750255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.553 qpair failed and we were unable to recover it. 00:41:42.553 [2024-10-01 22:40:37.750451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.553 [2024-10-01 22:40:37.750459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.553 qpair failed and we were unable to recover it. 00:41:42.553 [2024-10-01 22:40:37.750642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.553 [2024-10-01 22:40:37.750650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.553 qpair failed and we were unable to recover it. 00:41:42.553 [2024-10-01 22:40:37.750978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.553 [2024-10-01 22:40:37.750985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.553 qpair failed and we were unable to recover it. 00:41:42.553 [2024-10-01 22:40:37.751303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.553 [2024-10-01 22:40:37.751310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.553 qpair failed and we were unable to recover it. 00:41:42.553 [2024-10-01 22:40:37.751595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.553 [2024-10-01 22:40:37.751602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.553 qpair failed and we were unable to recover it. 00:41:42.553 [2024-10-01 22:40:37.751893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.553 [2024-10-01 22:40:37.751909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.553 qpair failed and we were unable to recover it. 00:41:42.553 [2024-10-01 22:40:37.752207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.553 [2024-10-01 22:40:37.752214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.553 qpair failed and we were unable to recover it. 
00:41:42.553 [2024-10-01 22:40:37.752507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.553 [2024-10-01 22:40:37.752522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.553 qpair failed and we were unable to recover it. 00:41:42.553 [2024-10-01 22:40:37.752789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.553 [2024-10-01 22:40:37.752796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.553 qpair failed and we were unable to recover it. 00:41:42.553 [2024-10-01 22:40:37.752983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.553 [2024-10-01 22:40:37.752990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.553 qpair failed and we were unable to recover it. 00:41:42.553 [2024-10-01 22:40:37.753322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.553 [2024-10-01 22:40:37.753330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.553 qpair failed and we were unable to recover it. 00:41:42.553 [2024-10-01 22:40:37.753606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.553 [2024-10-01 22:40:37.753613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.553 qpair failed and we were unable to recover it. 00:41:42.553 [2024-10-01 22:40:37.753802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.553 [2024-10-01 22:40:37.753811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.553 qpair failed and we were unable to recover it. 00:41:42.553 [2024-10-01 22:40:37.754134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.553 [2024-10-01 22:40:37.754141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.553 qpair failed and we were unable to recover it. 00:41:42.553 [2024-10-01 22:40:37.754461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.554 [2024-10-01 22:40:37.754468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.554 qpair failed and we were unable to recover it. 00:41:42.554 [2024-10-01 22:40:37.754839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.554 [2024-10-01 22:40:37.754848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.554 qpair failed and we were unable to recover it. 00:41:42.554 [2024-10-01 22:40:37.755144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.554 [2024-10-01 22:40:37.755151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.554 qpair failed and we were unable to recover it. 
00:41:42.554 [2024-10-01 22:40:37.755467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.554 [2024-10-01 22:40:37.755473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.554 qpair failed and we were unable to recover it. 00:41:42.554 [2024-10-01 22:40:37.755658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.554 [2024-10-01 22:40:37.755665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.554 qpair failed and we were unable to recover it. 00:41:42.835 [2024-10-01 22:40:37.755992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.835 [2024-10-01 22:40:37.756001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.835 qpair failed and we were unable to recover it. 00:41:42.835 [2024-10-01 22:40:37.756312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.835 [2024-10-01 22:40:37.756320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.835 qpair failed and we were unable to recover it. 00:41:42.835 [2024-10-01 22:40:37.756640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.835 [2024-10-01 22:40:37.756647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.835 qpair failed and we were unable to recover it. 00:41:42.835 [2024-10-01 22:40:37.756983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.835 [2024-10-01 22:40:37.756989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.835 qpair failed and we were unable to recover it. 00:41:42.835 [2024-10-01 22:40:37.757171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.835 [2024-10-01 22:40:37.757178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.835 qpair failed and we were unable to recover it. 00:41:42.835 [2024-10-01 22:40:37.757470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.835 [2024-10-01 22:40:37.757478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.835 qpair failed and we were unable to recover it. 00:41:42.835 [2024-10-01 22:40:37.757772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.835 [2024-10-01 22:40:37.757780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.835 qpair failed and we were unable to recover it. 00:41:42.835 [2024-10-01 22:40:37.757978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.835 [2024-10-01 22:40:37.757985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.835 qpair failed and we were unable to recover it. 
00:41:42.835 [2024-10-01 22:40:37.758285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.835 [2024-10-01 22:40:37.758294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.835 qpair failed and we were unable to recover it. 00:41:42.835 [2024-10-01 22:40:37.758465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.835 [2024-10-01 22:40:37.758473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.835 qpair failed and we were unable to recover it. 00:41:42.836 [2024-10-01 22:40:37.758788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.836 [2024-10-01 22:40:37.758795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.836 qpair failed and we were unable to recover it. 00:41:42.836 [2024-10-01 22:40:37.758970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.836 [2024-10-01 22:40:37.758978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.836 qpair failed and we were unable to recover it. 00:41:42.836 [2024-10-01 22:40:37.759179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.836 [2024-10-01 22:40:37.759186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.836 qpair failed and we were unable to recover it. 00:41:42.836 [2024-10-01 22:40:37.759347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.836 [2024-10-01 22:40:37.759355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.836 qpair failed and we were unable to recover it. 00:41:42.836 [2024-10-01 22:40:37.759669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.836 [2024-10-01 22:40:37.759676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.836 qpair failed and we were unable to recover it. 00:41:42.836 [2024-10-01 22:40:37.760059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.836 [2024-10-01 22:40:37.760066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.836 qpair failed and we were unable to recover it. 00:41:42.836 [2024-10-01 22:40:37.760385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.836 [2024-10-01 22:40:37.760392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.836 qpair failed and we were unable to recover it. 00:41:42.836 [2024-10-01 22:40:37.760575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.836 [2024-10-01 22:40:37.760583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.836 qpair failed and we were unable to recover it. 
00:41:42.836 [2024-10-01 22:40:37.760913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:42.836 [2024-10-01 22:40:37.760920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420
00:41:42.836 qpair failed and we were unable to recover it.
[... the same three-line failure repeats ~200 more times, 22:40:37.761191 through 22:40:37.817714: every reconnect attempt on tqpair=0x7fa7fc000b90 to 10.0.0.2, port=4420 gets connect() errno = 111 and ends with "qpair failed and we were unable to recover it." ...]
00:41:42.841 [2024-10-01 22:40:37.818026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.842 [2024-10-01 22:40:37.818033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.842 qpair failed and we were unable to recover it. 00:41:42.842 [2024-10-01 22:40:37.818308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.842 [2024-10-01 22:40:37.818315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.842 qpair failed and we were unable to recover it. 00:41:42.842 [2024-10-01 22:40:37.818650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.842 [2024-10-01 22:40:37.818657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.842 qpair failed and we were unable to recover it. 00:41:42.842 [2024-10-01 22:40:37.818994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.842 [2024-10-01 22:40:37.819002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.842 qpair failed and we were unable to recover it. 00:41:42.842 [2024-10-01 22:40:37.819330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.842 [2024-10-01 22:40:37.819337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.842 qpair failed and we were unable to recover it. 00:41:42.842 [2024-10-01 22:40:37.819699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.842 [2024-10-01 22:40:37.819707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.842 qpair failed and we were unable to recover it. 00:41:42.842 [2024-10-01 22:40:37.819905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.842 [2024-10-01 22:40:37.819912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.842 qpair failed and we were unable to recover it. 00:41:42.842 [2024-10-01 22:40:37.820055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.842 [2024-10-01 22:40:37.820061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.842 qpair failed and we were unable to recover it. 00:41:42.842 [2024-10-01 22:40:37.820217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.842 [2024-10-01 22:40:37.820224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.842 qpair failed and we were unable to recover it. 00:41:42.842 [2024-10-01 22:40:37.820402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.842 [2024-10-01 22:40:37.820409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.842 qpair failed and we were unable to recover it. 
00:41:42.842 [2024-10-01 22:40:37.820451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.842 [2024-10-01 22:40:37.820457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.842 qpair failed and we were unable to recover it. 00:41:42.842 [2024-10-01 22:40:37.820537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.842 [2024-10-01 22:40:37.820544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.842 qpair failed and we were unable to recover it. 00:41:42.842 [2024-10-01 22:40:37.820839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.842 [2024-10-01 22:40:37.820846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.842 qpair failed and we were unable to recover it. 00:41:42.842 [2024-10-01 22:40:37.821179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.842 [2024-10-01 22:40:37.821186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.842 qpair failed and we were unable to recover it. 00:41:42.842 [2024-10-01 22:40:37.821502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.842 [2024-10-01 22:40:37.821510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.842 qpair failed and we were unable to recover it. 00:41:42.842 [2024-10-01 22:40:37.821809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.842 [2024-10-01 22:40:37.821816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.842 qpair failed and we were unable to recover it. 00:41:42.842 [2024-10-01 22:40:37.822169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.842 [2024-10-01 22:40:37.822177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.842 qpair failed and we were unable to recover it. 00:41:42.842 [2024-10-01 22:40:37.822496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.842 [2024-10-01 22:40:37.822502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.842 qpair failed and we were unable to recover it. 00:41:42.842 [2024-10-01 22:40:37.822801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.842 [2024-10-01 22:40:37.822809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.842 qpair failed and we were unable to recover it. 00:41:42.842 [2024-10-01 22:40:37.823141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.842 [2024-10-01 22:40:37.823148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.842 qpair failed and we were unable to recover it. 
00:41:42.842 [2024-10-01 22:40:37.823375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.842 [2024-10-01 22:40:37.823382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.842 qpair failed and we were unable to recover it. 00:41:42.842 [2024-10-01 22:40:37.823554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.842 [2024-10-01 22:40:37.823561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.842 qpair failed and we were unable to recover it. 00:41:42.842 [2024-10-01 22:40:37.823873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.842 [2024-10-01 22:40:37.823880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.842 qpair failed and we were unable to recover it. 00:41:42.842 [2024-10-01 22:40:37.824193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.842 [2024-10-01 22:40:37.824200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.842 qpair failed and we were unable to recover it. 00:41:42.842 [2024-10-01 22:40:37.824355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.842 [2024-10-01 22:40:37.824362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.842 qpair failed and we were unable to recover it. 00:41:42.842 [2024-10-01 22:40:37.824636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.842 [2024-10-01 22:40:37.824644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.842 qpair failed and we were unable to recover it. 00:41:42.842 [2024-10-01 22:40:37.825007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.842 [2024-10-01 22:40:37.825013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.842 qpair failed and we were unable to recover it. 00:41:42.842 [2024-10-01 22:40:37.825247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.842 [2024-10-01 22:40:37.825254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.842 qpair failed and we were unable to recover it. 00:41:42.842 [2024-10-01 22:40:37.825414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.842 [2024-10-01 22:40:37.825422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.842 qpair failed and we were unable to recover it. 00:41:42.842 [2024-10-01 22:40:37.825737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.842 [2024-10-01 22:40:37.825745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.842 qpair failed and we were unable to recover it. 
00:41:42.842 [2024-10-01 22:40:37.826109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.842 [2024-10-01 22:40:37.826116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.842 qpair failed and we were unable to recover it. 00:41:42.842 [2024-10-01 22:40:37.826409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.842 [2024-10-01 22:40:37.826416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.842 qpair failed and we were unable to recover it. 00:41:42.842 [2024-10-01 22:40:37.826698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.842 [2024-10-01 22:40:37.826706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.842 qpair failed and we were unable to recover it. 00:41:42.842 [2024-10-01 22:40:37.827020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.842 [2024-10-01 22:40:37.827036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.842 qpair failed and we were unable to recover it. 00:41:42.842 [2024-10-01 22:40:37.827325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.842 [2024-10-01 22:40:37.827333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.842 qpair failed and we were unable to recover it. 00:41:42.842 [2024-10-01 22:40:37.827536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.842 [2024-10-01 22:40:37.827545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.842 qpair failed and we were unable to recover it. 00:41:42.842 [2024-10-01 22:40:37.827873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.842 [2024-10-01 22:40:37.827881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.842 qpair failed and we were unable to recover it. 00:41:42.842 [2024-10-01 22:40:37.828177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.842 [2024-10-01 22:40:37.828185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.842 qpair failed and we were unable to recover it. 00:41:42.842 [2024-10-01 22:40:37.828369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.842 [2024-10-01 22:40:37.828377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.842 qpair failed and we were unable to recover it. 00:41:42.843 [2024-10-01 22:40:37.828695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.843 [2024-10-01 22:40:37.828703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.843 qpair failed and we were unable to recover it. 
00:41:42.843 [2024-10-01 22:40:37.828953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.843 [2024-10-01 22:40:37.828960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.843 qpair failed and we were unable to recover it. 00:41:42.843 [2024-10-01 22:40:37.829306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.843 [2024-10-01 22:40:37.829312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.843 qpair failed and we were unable to recover it. 00:41:42.843 [2024-10-01 22:40:37.829620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.843 [2024-10-01 22:40:37.829630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.843 qpair failed and we were unable to recover it. 00:41:42.843 [2024-10-01 22:40:37.829813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.843 [2024-10-01 22:40:37.829820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.843 qpair failed and we were unable to recover it. 00:41:42.843 [2024-10-01 22:40:37.830143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.843 [2024-10-01 22:40:37.830150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.843 qpair failed and we were unable to recover it. 00:41:42.843 [2024-10-01 22:40:37.830481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.843 [2024-10-01 22:40:37.830487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.843 qpair failed and we were unable to recover it. 00:41:42.843 [2024-10-01 22:40:37.830787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.843 [2024-10-01 22:40:37.830795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.843 qpair failed and we were unable to recover it. 00:41:42.843 [2024-10-01 22:40:37.831107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.843 [2024-10-01 22:40:37.831114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.843 qpair failed and we were unable to recover it. 00:41:42.843 [2024-10-01 22:40:37.831309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.843 [2024-10-01 22:40:37.831316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.843 qpair failed and we were unable to recover it. 00:41:42.843 [2024-10-01 22:40:37.831639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.843 [2024-10-01 22:40:37.831647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.843 qpair failed and we were unable to recover it. 
00:41:42.843 [2024-10-01 22:40:37.831832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.843 [2024-10-01 22:40:37.831840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.843 qpair failed and we were unable to recover it. 00:41:42.843 [2024-10-01 22:40:37.832134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.843 [2024-10-01 22:40:37.832141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.843 qpair failed and we were unable to recover it. 00:41:42.843 [2024-10-01 22:40:37.832495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.843 [2024-10-01 22:40:37.832502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.843 qpair failed and we were unable to recover it. 00:41:42.843 [2024-10-01 22:40:37.832797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.843 [2024-10-01 22:40:37.832804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.843 qpair failed and we were unable to recover it. 00:41:42.843 [2024-10-01 22:40:37.833125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.843 [2024-10-01 22:40:37.833133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.843 qpair failed and we were unable to recover it. 00:41:42.843 [2024-10-01 22:40:37.833312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.843 [2024-10-01 22:40:37.833319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.843 qpair failed and we were unable to recover it. 00:41:42.843 [2024-10-01 22:40:37.833482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.843 [2024-10-01 22:40:37.833489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.843 qpair failed and we were unable to recover it. 00:41:42.843 [2024-10-01 22:40:37.833672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.843 [2024-10-01 22:40:37.833679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.843 qpair failed and we were unable to recover it. 00:41:42.843 [2024-10-01 22:40:37.833831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.843 [2024-10-01 22:40:37.833838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.843 qpair failed and we were unable to recover it. 00:41:42.843 [2024-10-01 22:40:37.834016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.843 [2024-10-01 22:40:37.834024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.843 qpair failed and we were unable to recover it. 
00:41:42.843 [2024-10-01 22:40:37.834196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.843 [2024-10-01 22:40:37.834202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.843 qpair failed and we were unable to recover it. 00:41:42.843 [2024-10-01 22:40:37.834385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.843 [2024-10-01 22:40:37.834392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.843 qpair failed and we were unable to recover it. 00:41:42.843 [2024-10-01 22:40:37.834551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.843 [2024-10-01 22:40:37.834558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.843 qpair failed and we were unable to recover it. 00:41:42.843 [2024-10-01 22:40:37.834704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.843 [2024-10-01 22:40:37.834712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.843 qpair failed and we were unable to recover it. 00:41:42.843 [2024-10-01 22:40:37.835038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.843 [2024-10-01 22:40:37.835045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.843 qpair failed and we were unable to recover it. 00:41:42.843 [2024-10-01 22:40:37.835331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.843 [2024-10-01 22:40:37.835338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.843 qpair failed and we were unable to recover it. 00:41:42.843 [2024-10-01 22:40:37.835651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.843 [2024-10-01 22:40:37.835658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.843 qpair failed and we were unable to recover it. 00:41:42.843 [2024-10-01 22:40:37.836009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.843 [2024-10-01 22:40:37.836017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.843 qpair failed and we were unable to recover it. 00:41:42.843 [2024-10-01 22:40:37.836326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.843 [2024-10-01 22:40:37.836333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.843 qpair failed and we were unable to recover it. 00:41:42.843 [2024-10-01 22:40:37.836621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.843 [2024-10-01 22:40:37.836631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.843 qpair failed and we were unable to recover it. 
00:41:42.843 [2024-10-01 22:40:37.836797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.843 [2024-10-01 22:40:37.836805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.843 qpair failed and we were unable to recover it. 00:41:42.843 [2024-10-01 22:40:37.837122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.843 [2024-10-01 22:40:37.837129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.843 qpair failed and we were unable to recover it. 00:41:42.843 [2024-10-01 22:40:37.837399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.843 [2024-10-01 22:40:37.837405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.843 qpair failed and we were unable to recover it. 00:41:42.843 [2024-10-01 22:40:37.837742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.843 [2024-10-01 22:40:37.837749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.843 qpair failed and we were unable to recover it. 00:41:42.843 [2024-10-01 22:40:37.837957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.843 [2024-10-01 22:40:37.837964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.843 qpair failed and we were unable to recover it. 00:41:42.843 [2024-10-01 22:40:37.838178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.843 [2024-10-01 22:40:37.838186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.843 qpair failed and we were unable to recover it. 00:41:42.843 [2024-10-01 22:40:37.838364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.843 [2024-10-01 22:40:37.838370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.843 qpair failed and we were unable to recover it. 00:41:42.843 [2024-10-01 22:40:37.838693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.844 [2024-10-01 22:40:37.838700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.844 qpair failed and we were unable to recover it. 00:41:42.844 [2024-10-01 22:40:37.839096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.844 [2024-10-01 22:40:37.839104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.844 qpair failed and we were unable to recover it. 00:41:42.844 [2024-10-01 22:40:37.839295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.844 [2024-10-01 22:40:37.839302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.844 qpair failed and we were unable to recover it. 
00:41:42.844 [2024-10-01 22:40:37.839620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.844 [2024-10-01 22:40:37.839631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.844 qpair failed and we were unable to recover it. 00:41:42.844 [2024-10-01 22:40:37.839944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.844 [2024-10-01 22:40:37.839951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.844 qpair failed and we were unable to recover it. 00:41:42.844 [2024-10-01 22:40:37.840241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.844 [2024-10-01 22:40:37.840247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.844 qpair failed and we were unable to recover it. 00:41:42.844 [2024-10-01 22:40:37.840569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.844 [2024-10-01 22:40:37.840576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.844 qpair failed and we were unable to recover it. 00:41:42.844 [2024-10-01 22:40:37.840872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.844 [2024-10-01 22:40:37.840880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.844 qpair failed and we were unable to recover it. 00:41:42.844 [2024-10-01 22:40:37.841183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.844 [2024-10-01 22:40:37.841191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.844 qpair failed and we were unable to recover it. 00:41:42.844 [2024-10-01 22:40:37.841498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.844 [2024-10-01 22:40:37.841505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.844 qpair failed and we were unable to recover it. 00:41:42.844 [2024-10-01 22:40:37.841667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.844 [2024-10-01 22:40:37.841675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.844 qpair failed and we were unable to recover it. 00:41:42.844 [2024-10-01 22:40:37.841878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.844 [2024-10-01 22:40:37.841885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.844 qpair failed and we were unable to recover it. 00:41:42.844 [2024-10-01 22:40:37.842188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.844 [2024-10-01 22:40:37.842195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.844 qpair failed and we were unable to recover it. 
00:41:42.844 [2024-10-01 22:40:37.842506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.844 [2024-10-01 22:40:37.842513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.844 qpair failed and we were unable to recover it. 00:41:42.844 [2024-10-01 22:40:37.842820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.844 [2024-10-01 22:40:37.842827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.844 qpair failed and we were unable to recover it. 00:41:42.844 [2024-10-01 22:40:37.843136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.844 [2024-10-01 22:40:37.843144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.844 qpair failed and we were unable to recover it. 00:41:42.844 [2024-10-01 22:40:37.843421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.844 [2024-10-01 22:40:37.843428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.844 qpair failed and we were unable to recover it. 00:41:42.844 [2024-10-01 22:40:37.843727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.844 [2024-10-01 22:40:37.843735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.844 qpair failed and we were unable to recover it. 00:41:42.844 [2024-10-01 22:40:37.844021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.844 [2024-10-01 22:40:37.844028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.844 qpair failed and we were unable to recover it. 00:41:42.844 [2024-10-01 22:40:37.844337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.844 [2024-10-01 22:40:37.844344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.844 qpair failed and we were unable to recover it. 00:41:42.844 [2024-10-01 22:40:37.844658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.844 [2024-10-01 22:40:37.844665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.844 qpair failed and we were unable to recover it. 00:41:42.844 [2024-10-01 22:40:37.844847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.844 [2024-10-01 22:40:37.844854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.844 qpair failed and we were unable to recover it. 00:41:42.844 [2024-10-01 22:40:37.845163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.844 [2024-10-01 22:40:37.845171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.844 qpair failed and we were unable to recover it. 
00:41:42.844 [2024-10-01 22:40:37.845325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.844 [2024-10-01 22:40:37.845332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.844 qpair failed and we were unable to recover it. 00:41:42.844 [2024-10-01 22:40:37.845699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.844 [2024-10-01 22:40:37.845706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.844 qpair failed and we were unable to recover it. 00:41:42.844 [2024-10-01 22:40:37.846024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.844 [2024-10-01 22:40:37.846031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.844 qpair failed and we were unable to recover it. 00:41:42.844 [2024-10-01 22:40:37.846109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.844 [2024-10-01 22:40:37.846115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.844 qpair failed and we were unable to recover it. 00:41:42.844 [2024-10-01 22:40:37.846306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.844 [2024-10-01 22:40:37.846313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.844 qpair failed and we were unable to recover it. 00:41:42.844 [2024-10-01 22:40:37.846600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.844 [2024-10-01 22:40:37.846607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.844 qpair failed and we were unable to recover it. 00:41:42.844 [2024-10-01 22:40:37.846807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.844 [2024-10-01 22:40:37.846814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.844 qpair failed and we were unable to recover it. 00:41:42.844 [2024-10-01 22:40:37.847039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.844 [2024-10-01 22:40:37.847046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.844 qpair failed and we were unable to recover it. 00:41:42.844 [2024-10-01 22:40:37.847214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.844 [2024-10-01 22:40:37.847222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.844 qpair failed and we were unable to recover it. 00:41:42.844 [2024-10-01 22:40:37.847531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.844 [2024-10-01 22:40:37.847538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.845 qpair failed and we were unable to recover it. 
00:41:42.845 [2024-10-01 22:40:37.847860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.845 [2024-10-01 22:40:37.847875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.845 qpair failed and we were unable to recover it. 00:41:42.845 [2024-10-01 22:40:37.848204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.845 [2024-10-01 22:40:37.848211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.845 qpair failed and we were unable to recover it. 00:41:42.845 [2024-10-01 22:40:37.848404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.845 [2024-10-01 22:40:37.848411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.845 qpair failed and we were unable to recover it. 00:41:42.845 [2024-10-01 22:40:37.848762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.845 [2024-10-01 22:40:37.848769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.845 qpair failed and we were unable to recover it. 00:41:42.845 [2024-10-01 22:40:37.849093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.845 [2024-10-01 22:40:37.849100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.845 qpair failed and we were unable to recover it. 00:41:42.845 [2024-10-01 22:40:37.849165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.845 [2024-10-01 22:40:37.849173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.845 qpair failed and we were unable to recover it. 00:41:42.845 [2024-10-01 22:40:37.849331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.845 [2024-10-01 22:40:37.849339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.845 qpair failed and we were unable to recover it. 00:41:42.845 [2024-10-01 22:40:37.849604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.845 [2024-10-01 22:40:37.849612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.845 qpair failed and we were unable to recover it. 00:41:42.845 [2024-10-01 22:40:37.849948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.845 [2024-10-01 22:40:37.849955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.845 qpair failed and we were unable to recover it. 00:41:42.845 [2024-10-01 22:40:37.850124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.845 [2024-10-01 22:40:37.850131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.845 qpair failed and we were unable to recover it. 
00:41:42.845 [2024-10-01 22:40:37.850482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.845 [2024-10-01 22:40:37.850489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.845 qpair failed and we were unable to recover it. 00:41:42.845 [2024-10-01 22:40:37.850786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.845 [2024-10-01 22:40:37.850793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.845 qpair failed and we were unable to recover it. 00:41:42.845 [2024-10-01 22:40:37.851151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.845 [2024-10-01 22:40:37.851158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.845 qpair failed and we were unable to recover it. 00:41:42.845 [2024-10-01 22:40:37.851507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.845 [2024-10-01 22:40:37.851514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.845 qpair failed and we were unable to recover it. 00:41:42.845 [2024-10-01 22:40:37.851839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.845 [2024-10-01 22:40:37.851847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.845 qpair failed and we were unable to recover it. 00:41:42.845 [2024-10-01 22:40:37.852154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.845 [2024-10-01 22:40:37.852161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.845 qpair failed and we were unable to recover it. 00:41:42.845 [2024-10-01 22:40:37.852490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.845 [2024-10-01 22:40:37.852497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.845 qpair failed and we were unable to recover it. 00:41:42.845 [2024-10-01 22:40:37.852795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.845 [2024-10-01 22:40:37.852802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.845 qpair failed and we were unable to recover it. 00:41:42.845 [2024-10-01 22:40:37.853116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.845 [2024-10-01 22:40:37.853130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.845 qpair failed and we were unable to recover it. 00:41:42.845 [2024-10-01 22:40:37.853441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.845 [2024-10-01 22:40:37.853448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.845 qpair failed and we were unable to recover it. 
00:41:42.845 [2024-10-01 22:40:37.853659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.845 [2024-10-01 22:40:37.853666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.845 qpair failed and we were unable to recover it.
00:41:42.845 [... the same three-message sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7fa7fc000b90 / "qpair failed and we were unable to recover it.") repeats ~200 more times for addr=10.0.0.2, port=4420 between 22:40:37.853 and 22:40:37.911 ...]
00:41:42.851 [2024-10-01 22:40:37.911769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.851 [2024-10-01 22:40:37.911778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.851 qpair failed and we were unable to recover it.
00:41:42.851 [2024-10-01 22:40:37.912094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.851 [2024-10-01 22:40:37.912101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.851 qpair failed and we were unable to recover it. 00:41:42.851 [2024-10-01 22:40:37.912441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.851 [2024-10-01 22:40:37.912448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.851 qpair failed and we were unable to recover it. 00:41:42.851 [2024-10-01 22:40:37.912629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.851 [2024-10-01 22:40:37.912638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.851 qpair failed and we were unable to recover it. 00:41:42.851 [2024-10-01 22:40:37.912962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.851 [2024-10-01 22:40:37.912969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.851 qpair failed and we were unable to recover it. 00:41:42.851 [2024-10-01 22:40:37.913290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.851 [2024-10-01 22:40:37.913299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.851 qpair failed and we were unable to recover it. 00:41:42.851 [2024-10-01 22:40:37.913634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.851 [2024-10-01 22:40:37.913641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.851 qpair failed and we were unable to recover it. 00:41:42.851 [2024-10-01 22:40:37.913998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.851 [2024-10-01 22:40:37.914006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.851 qpair failed and we were unable to recover it. 00:41:42.851 [2024-10-01 22:40:37.914374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.851 [2024-10-01 22:40:37.914381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.851 qpair failed and we were unable to recover it. 00:41:42.851 [2024-10-01 22:40:37.914581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.851 [2024-10-01 22:40:37.914588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.851 qpair failed and we were unable to recover it. 00:41:42.851 [2024-10-01 22:40:37.914896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.851 [2024-10-01 22:40:37.914903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.851 qpair failed and we were unable to recover it. 
00:41:42.851 [2024-10-01 22:40:37.915286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.851 [2024-10-01 22:40:37.915293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.851 qpair failed and we were unable to recover it. 00:41:42.851 [2024-10-01 22:40:37.915613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.851 [2024-10-01 22:40:37.915620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.851 qpair failed and we were unable to recover it. 00:41:42.851 [2024-10-01 22:40:37.915903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.851 [2024-10-01 22:40:37.915911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.851 qpair failed and we were unable to recover it. 00:41:42.851 [2024-10-01 22:40:37.916112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.851 [2024-10-01 22:40:37.916119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.851 qpair failed and we were unable to recover it. 00:41:42.851 [2024-10-01 22:40:37.916401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.851 [2024-10-01 22:40:37.916409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.851 qpair failed and we were unable to recover it. 00:41:42.851 [2024-10-01 22:40:37.916755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.851 [2024-10-01 22:40:37.916767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.851 qpair failed and we were unable to recover it. 00:41:42.851 [2024-10-01 22:40:37.917064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.851 [2024-10-01 22:40:37.917072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.851 qpair failed and we were unable to recover it. 00:41:42.851 [2024-10-01 22:40:37.917388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.851 [2024-10-01 22:40:37.917396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.851 qpair failed and we were unable to recover it. 00:41:42.851 [2024-10-01 22:40:37.917674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.852 [2024-10-01 22:40:37.917682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.852 qpair failed and we were unable to recover it. 00:41:42.852 [2024-10-01 22:40:37.918072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.852 [2024-10-01 22:40:37.918079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.852 qpair failed and we were unable to recover it. 
00:41:42.852 [2024-10-01 22:40:37.918357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.852 [2024-10-01 22:40:37.918364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.852 qpair failed and we were unable to recover it. 00:41:42.852 [2024-10-01 22:40:37.918687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.852 [2024-10-01 22:40:37.918695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.852 qpair failed and we were unable to recover it. 00:41:42.852 [2024-10-01 22:40:37.918893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.852 [2024-10-01 22:40:37.918900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.852 qpair failed and we were unable to recover it. 00:41:42.852 [2024-10-01 22:40:37.919062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.852 [2024-10-01 22:40:37.919070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.852 qpair failed and we were unable to recover it. 00:41:42.852 [2024-10-01 22:40:37.919152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.852 [2024-10-01 22:40:37.919159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.852 qpair failed and we were unable to recover it. 00:41:42.852 [2024-10-01 22:40:37.919330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.852 [2024-10-01 22:40:37.919337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.852 qpair failed and we were unable to recover it. 00:41:42.852 [2024-10-01 22:40:37.919533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.852 [2024-10-01 22:40:37.919541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.852 qpair failed and we were unable to recover it. 00:41:42.852 [2024-10-01 22:40:37.919686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.852 [2024-10-01 22:40:37.919693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.852 qpair failed and we were unable to recover it. 00:41:42.852 [2024-10-01 22:40:37.919889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.852 [2024-10-01 22:40:37.919896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.852 qpair failed and we were unable to recover it. 00:41:42.852 [2024-10-01 22:40:37.920234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.852 [2024-10-01 22:40:37.920242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.852 qpair failed and we were unable to recover it. 
00:41:42.852 [2024-10-01 22:40:37.920565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.852 [2024-10-01 22:40:37.920573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.852 qpair failed and we were unable to recover it. 00:41:42.852 [2024-10-01 22:40:37.920902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.852 [2024-10-01 22:40:37.920909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.852 qpair failed and we were unable to recover it. 00:41:42.852 [2024-10-01 22:40:37.921230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.852 [2024-10-01 22:40:37.921237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.852 qpair failed and we were unable to recover it. 00:41:42.852 [2024-10-01 22:40:37.921422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.852 [2024-10-01 22:40:37.921430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.852 qpair failed and we were unable to recover it. 00:41:42.852 [2024-10-01 22:40:37.921603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.852 [2024-10-01 22:40:37.921612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.852 qpair failed and we were unable to recover it. 00:41:42.852 [2024-10-01 22:40:37.921914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.852 [2024-10-01 22:40:37.921922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.852 qpair failed and we were unable to recover it. 00:41:42.852 [2024-10-01 22:40:37.922119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.852 [2024-10-01 22:40:37.922127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.852 qpair failed and we were unable to recover it. 00:41:42.852 [2024-10-01 22:40:37.922435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.852 [2024-10-01 22:40:37.922443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.852 qpair failed and we were unable to recover it. 00:41:42.852 [2024-10-01 22:40:37.922746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.852 [2024-10-01 22:40:37.922754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.852 qpair failed and we were unable to recover it. 00:41:42.852 [2024-10-01 22:40:37.923062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.852 [2024-10-01 22:40:37.923069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.852 qpair failed and we were unable to recover it. 
00:41:42.852 [2024-10-01 22:40:37.923404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.852 [2024-10-01 22:40:37.923412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.852 qpair failed and we were unable to recover it. 00:41:42.852 [2024-10-01 22:40:37.923735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.852 [2024-10-01 22:40:37.923743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.852 qpair failed and we were unable to recover it. 00:41:42.852 [2024-10-01 22:40:37.924079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.852 [2024-10-01 22:40:37.924086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.852 qpair failed and we were unable to recover it. 00:41:42.852 [2024-10-01 22:40:37.924389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.852 [2024-10-01 22:40:37.924397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.852 qpair failed and we were unable to recover it. 00:41:42.852 [2024-10-01 22:40:37.924577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.852 [2024-10-01 22:40:37.924584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.852 qpair failed and we were unable to recover it. 00:41:42.852 [2024-10-01 22:40:37.924749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.852 [2024-10-01 22:40:37.924756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.852 qpair failed and we were unable to recover it. 00:41:42.852 [2024-10-01 22:40:37.925050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.852 [2024-10-01 22:40:37.925057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.852 qpair failed and we were unable to recover it. 00:41:42.852 [2024-10-01 22:40:37.925368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.852 [2024-10-01 22:40:37.925375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.852 qpair failed and we were unable to recover it. 00:41:42.852 [2024-10-01 22:40:37.925659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.852 [2024-10-01 22:40:37.925667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.852 qpair failed and we were unable to recover it. 00:41:42.852 [2024-10-01 22:40:37.925970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.852 [2024-10-01 22:40:37.925978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.852 qpair failed and we were unable to recover it. 
00:41:42.852 [2024-10-01 22:40:37.926175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.852 [2024-10-01 22:40:37.926182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.852 qpair failed and we were unable to recover it. 00:41:42.852 [2024-10-01 22:40:37.926448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.852 [2024-10-01 22:40:37.926455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.852 qpair failed and we were unable to recover it. 00:41:42.852 [2024-10-01 22:40:37.926744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.852 [2024-10-01 22:40:37.926753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.852 qpair failed and we were unable to recover it. 00:41:42.852 [2024-10-01 22:40:37.926833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.852 [2024-10-01 22:40:37.926841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.852 qpair failed and we were unable to recover it. 00:41:42.852 [2024-10-01 22:40:37.927004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.852 [2024-10-01 22:40:37.927011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.852 qpair failed and we were unable to recover it. 00:41:42.852 [2024-10-01 22:40:37.927328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.852 [2024-10-01 22:40:37.927337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.852 qpair failed and we were unable to recover it. 00:41:42.852 [2024-10-01 22:40:37.927549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.852 [2024-10-01 22:40:37.927556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.852 qpair failed and we were unable to recover it. 00:41:42.853 [2024-10-01 22:40:37.927827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.853 [2024-10-01 22:40:37.927835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.853 qpair failed and we were unable to recover it. 00:41:42.853 [2024-10-01 22:40:37.928151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.853 [2024-10-01 22:40:37.928158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.853 qpair failed and we were unable to recover it. 00:41:42.853 [2024-10-01 22:40:37.928345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.853 [2024-10-01 22:40:37.928353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.853 qpair failed and we were unable to recover it. 
00:41:42.853 [2024-10-01 22:40:37.928653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.853 [2024-10-01 22:40:37.928660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.853 qpair failed and we were unable to recover it. 00:41:42.853 [2024-10-01 22:40:37.928918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.853 [2024-10-01 22:40:37.928926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.853 qpair failed and we were unable to recover it. 00:41:42.853 [2024-10-01 22:40:37.929203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.853 [2024-10-01 22:40:37.929211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.853 qpair failed and we were unable to recover it. 00:41:42.853 [2024-10-01 22:40:37.929556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.853 [2024-10-01 22:40:37.929564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.853 qpair failed and we were unable to recover it. 00:41:42.853 [2024-10-01 22:40:37.929869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.853 [2024-10-01 22:40:37.929877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.853 qpair failed and we were unable to recover it. 00:41:42.853 [2024-10-01 22:40:37.930257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.853 [2024-10-01 22:40:37.930265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.853 qpair failed and we were unable to recover it. 00:41:42.853 [2024-10-01 22:40:37.930511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.853 [2024-10-01 22:40:37.930519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.853 qpair failed and we were unable to recover it. 00:41:42.853 [2024-10-01 22:40:37.930996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.853 [2024-10-01 22:40:37.931004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.853 qpair failed and we were unable to recover it. 00:41:42.853 [2024-10-01 22:40:37.931285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.853 [2024-10-01 22:40:37.931292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.853 qpair failed and we were unable to recover it. 00:41:42.853 [2024-10-01 22:40:37.931618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.853 [2024-10-01 22:40:37.931628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.853 qpair failed and we were unable to recover it. 
00:41:42.853 [2024-10-01 22:40:37.931804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.853 [2024-10-01 22:40:37.931811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.853 qpair failed and we were unable to recover it. 00:41:42.853 [2024-10-01 22:40:37.931853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.853 [2024-10-01 22:40:37.931859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.853 qpair failed and we were unable to recover it. 00:41:42.853 [2024-10-01 22:40:37.931905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.853 [2024-10-01 22:40:37.931911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.853 qpair failed and we were unable to recover it. 00:41:42.853 [2024-10-01 22:40:37.932240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.853 [2024-10-01 22:40:37.932247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.853 qpair failed and we were unable to recover it. 00:41:42.853 [2024-10-01 22:40:37.932288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.853 [2024-10-01 22:40:37.932296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.853 qpair failed and we were unable to recover it. 00:41:42.853 [2024-10-01 22:40:37.932454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.853 [2024-10-01 22:40:37.932462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.853 qpair failed and we were unable to recover it. 00:41:42.853 [2024-10-01 22:40:37.932645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.853 [2024-10-01 22:40:37.932653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.853 qpair failed and we were unable to recover it. 00:41:42.853 [2024-10-01 22:40:37.932696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.853 [2024-10-01 22:40:37.932704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.853 qpair failed and we were unable to recover it. 00:41:42.853 [2024-10-01 22:40:37.933035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.853 [2024-10-01 22:40:37.933041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.853 qpair failed and we were unable to recover it. 00:41:42.853 [2024-10-01 22:40:37.933375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.853 [2024-10-01 22:40:37.933383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.853 qpair failed and we were unable to recover it. 
00:41:42.853 [2024-10-01 22:40:37.933713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.853 [2024-10-01 22:40:37.933721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.853 qpair failed and we were unable to recover it. 00:41:42.853 [2024-10-01 22:40:37.933887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.853 [2024-10-01 22:40:37.933894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.853 qpair failed and we were unable to recover it. 00:41:42.853 [2024-10-01 22:40:37.934063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.853 [2024-10-01 22:40:37.934070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.853 qpair failed and we were unable to recover it. 00:41:42.853 [2024-10-01 22:40:37.934363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.853 [2024-10-01 22:40:37.934371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.853 qpair failed and we were unable to recover it. 00:41:42.853 [2024-10-01 22:40:37.934524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.853 [2024-10-01 22:40:37.934531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.853 qpair failed and we were unable to recover it. 00:41:42.853 [2024-10-01 22:40:37.934814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.853 [2024-10-01 22:40:37.934822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.853 qpair failed and we were unable to recover it. 00:41:42.853 [2024-10-01 22:40:37.935132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.853 [2024-10-01 22:40:37.935139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.853 qpair failed and we were unable to recover it. 00:41:42.853 [2024-10-01 22:40:37.935307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.853 [2024-10-01 22:40:37.935314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.853 qpair failed and we were unable to recover it. 00:41:42.853 [2024-10-01 22:40:37.935501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.853 [2024-10-01 22:40:37.935508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.853 qpair failed and we were unable to recover it. 00:41:42.853 [2024-10-01 22:40:37.935904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.853 [2024-10-01 22:40:37.935913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.853 qpair failed and we were unable to recover it. 
00:41:42.853 [2024-10-01 22:40:37.936231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.853 [2024-10-01 22:40:37.936238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.853 qpair failed and we were unable to recover it. 00:41:42.853 [2024-10-01 22:40:37.936533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.853 [2024-10-01 22:40:37.936541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.853 qpair failed and we were unable to recover it. 00:41:42.853 [2024-10-01 22:40:37.936878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.853 [2024-10-01 22:40:37.936885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.853 qpair failed and we were unable to recover it. 00:41:42.853 [2024-10-01 22:40:37.937161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.853 [2024-10-01 22:40:37.937169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.853 qpair failed and we were unable to recover it. 00:41:42.853 [2024-10-01 22:40:37.937377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.853 [2024-10-01 22:40:37.937384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.853 qpair failed and we were unable to recover it. 00:41:42.853 [2024-10-01 22:40:37.937674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.854 [2024-10-01 22:40:37.937682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.854 qpair failed and we were unable to recover it. 00:41:42.854 [2024-10-01 22:40:37.938012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.854 [2024-10-01 22:40:37.938021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.854 qpair failed and we were unable to recover it. 00:41:42.854 [2024-10-01 22:40:37.938203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.854 [2024-10-01 22:40:37.938211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.854 qpair failed and we were unable to recover it. 00:41:42.854 [2024-10-01 22:40:37.938612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.854 [2024-10-01 22:40:37.938619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.854 qpair failed and we were unable to recover it. 00:41:42.854 [2024-10-01 22:40:37.938904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.854 [2024-10-01 22:40:37.938912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.854 qpair failed and we were unable to recover it. 
00:41:42.854 [2024-10-01 22:40:37.939083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.854 [2024-10-01 22:40:37.939090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.854 qpair failed and we were unable to recover it. 00:41:42.854 [2024-10-01 22:40:37.939410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.854 [2024-10-01 22:40:37.939418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.854 qpair failed and we were unable to recover it. 00:41:42.854 [2024-10-01 22:40:37.939590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.854 [2024-10-01 22:40:37.939597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.854 qpair failed and we were unable to recover it. 00:41:42.854 [2024-10-01 22:40:37.939838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.854 [2024-10-01 22:40:37.939846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.854 qpair failed and we were unable to recover it. 00:41:42.854 [2024-10-01 22:40:37.940126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.854 [2024-10-01 22:40:37.940133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.854 qpair failed and we were unable to recover it. 00:41:42.854 [2024-10-01 22:40:37.940473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.854 [2024-10-01 22:40:37.940481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.854 qpair failed and we were unable to recover it. 00:41:42.854 [2024-10-01 22:40:37.940728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.854 [2024-10-01 22:40:37.940736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.854 qpair failed and we were unable to recover it. 00:41:42.854 [2024-10-01 22:40:37.941022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.854 [2024-10-01 22:40:37.941029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.854 qpair failed and we were unable to recover it. 00:41:42.854 [2024-10-01 22:40:37.941350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.854 [2024-10-01 22:40:37.941357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.854 qpair failed and we were unable to recover it. 00:41:42.854 [2024-10-01 22:40:37.941662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.854 [2024-10-01 22:40:37.941670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.854 qpair failed and we were unable to recover it. 
00:41:42.854 [2024-10-01 22:40:37.942013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.854 [2024-10-01 22:40:37.942022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.854 qpair failed and we were unable to recover it. 00:41:42.854 [2024-10-01 22:40:37.942311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.854 [2024-10-01 22:40:37.942319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.854 qpair failed and we were unable to recover it. 00:41:42.854 [2024-10-01 22:40:37.942642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.854 [2024-10-01 22:40:37.942649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.854 qpair failed and we were unable to recover it. 00:41:42.854 [2024-10-01 22:40:37.943031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.854 [2024-10-01 22:40:37.943038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.854 qpair failed and we were unable to recover it. 00:41:42.854 [2024-10-01 22:40:37.943340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.854 [2024-10-01 22:40:37.943347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.854 qpair failed and we were unable to recover it. 00:41:42.854 [2024-10-01 22:40:37.943656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.854 [2024-10-01 22:40:37.943663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.854 qpair failed and we were unable to recover it. 00:41:42.854 [2024-10-01 22:40:37.943979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.854 [2024-10-01 22:40:37.943987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.854 qpair failed and we were unable to recover it. 00:41:42.854 [2024-10-01 22:40:37.944158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.854 [2024-10-01 22:40:37.944166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.854 qpair failed and we were unable to recover it. 00:41:42.854 [2024-10-01 22:40:37.944426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.854 [2024-10-01 22:40:37.944433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.854 qpair failed and we were unable to recover it. 00:41:42.854 [2024-10-01 22:40:37.944613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.854 [2024-10-01 22:40:37.944620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.854 qpair failed and we were unable to recover it. 
00:41:42.854 [2024-10-01 22:40:37.944949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.854 [2024-10-01 22:40:37.944956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.854 qpair failed and we were unable to recover it. 00:41:42.854 [2024-10-01 22:40:37.945288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.854 [2024-10-01 22:40:37.945296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.854 qpair failed and we were unable to recover it. 00:41:42.854 [2024-10-01 22:40:37.945497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.854 [2024-10-01 22:40:37.945506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.854 qpair failed and we were unable to recover it. 00:41:42.854 [2024-10-01 22:40:37.945800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.854 [2024-10-01 22:40:37.945808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.854 qpair failed and we were unable to recover it. 00:41:42.854 [2024-10-01 22:40:37.945999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.854 [2024-10-01 22:40:37.946007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.854 qpair failed and we were unable to recover it. 00:41:42.854 [2024-10-01 22:40:37.946321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.854 [2024-10-01 22:40:37.946328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.854 qpair failed and we were unable to recover it. 00:41:42.854 [2024-10-01 22:40:37.946488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.854 [2024-10-01 22:40:37.946496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.854 qpair failed and we were unable to recover it. 00:41:42.854 [2024-10-01 22:40:37.946764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.854 [2024-10-01 22:40:37.946771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.854 qpair failed and we were unable to recover it. 00:41:42.854 [2024-10-01 22:40:37.947040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.854 [2024-10-01 22:40:37.947047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.854 qpair failed and we were unable to recover it. 00:41:42.855 [2024-10-01 22:40:37.947221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.855 [2024-10-01 22:40:37.947228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.855 qpair failed and we were unable to recover it. 
00:41:42.855 [2024-10-01 22:40:37.947543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:42.855 [2024-10-01 22:40:37.947550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420
00:41:42.855 qpair failed and we were unable to recover it.
[the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 22:40:37.947590 through 22:40:37.979693]
[the failure triple continues at 22:40:37.980000, .980306, .980654, .980965, and .981278 while the test harness resumes; its lines are interleaved with the harness trace below]
00:41:42.858 22:40:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:41:42.858 22:40:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0
00:41:42.858 22:40:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:41:42.858 22:40:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:41:42.858 22:40:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[between the harness lines above, further failure triples were logged at 22:40:37.981599, .982042, and .982360, and they continue uninterrupted from 22:40:37.982537 through 22:40:37.984463]
[the identical failure triple repeats for every remaining attempt from 22:40:37.984784 onward, crossing into 22:40:38; the final logged attempt is:]
00:41:42.860 [2024-10-01 22:40:38.004384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:42.860 [2024-10-01 22:40:38.004391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420
00:41:42.860 qpair failed and we were unable to recover it.
00:41:42.860 [2024-10-01 22:40:38.004548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.860 [2024-10-01 22:40:38.004555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.860 qpair failed and we were unable to recover it. 00:41:42.860 [2024-10-01 22:40:38.004784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.860 [2024-10-01 22:40:38.004792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.860 qpair failed and we were unable to recover it. 00:41:42.860 [2024-10-01 22:40:38.005150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.860 [2024-10-01 22:40:38.005156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.860 qpair failed and we were unable to recover it. 00:41:42.860 [2024-10-01 22:40:38.005192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.860 [2024-10-01 22:40:38.005199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.860 qpair failed and we were unable to recover it. 00:41:42.860 [2024-10-01 22:40:38.005499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.860 [2024-10-01 22:40:38.005506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.860 qpair failed and we were unable to recover it. 00:41:42.860 [2024-10-01 22:40:38.005799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.860 [2024-10-01 22:40:38.005806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.860 qpair failed and we were unable to recover it. 00:41:42.860 [2024-10-01 22:40:38.006025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.860 [2024-10-01 22:40:38.006032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.860 qpair failed and we were unable to recover it. 00:41:42.860 [2024-10-01 22:40:38.006343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.860 [2024-10-01 22:40:38.006350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.860 qpair failed and we were unable to recover it. 00:41:42.860 [2024-10-01 22:40:38.006637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.860 [2024-10-01 22:40:38.006646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.860 qpair failed and we were unable to recover it. 00:41:42.860 [2024-10-01 22:40:38.006937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.860 [2024-10-01 22:40:38.006943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.860 qpair failed and we were unable to recover it. 
00:41:42.861 [2024-10-01 22:40:38.007116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.861 [2024-10-01 22:40:38.007123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.861 qpair failed and we were unable to recover it. 00:41:42.861 [2024-10-01 22:40:38.007445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.861 [2024-10-01 22:40:38.007453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.861 qpair failed and we were unable to recover it. 00:41:42.861 [2024-10-01 22:40:38.007760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.861 [2024-10-01 22:40:38.007767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.861 qpair failed and we were unable to recover it. 00:41:42.861 [2024-10-01 22:40:38.008083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.861 [2024-10-01 22:40:38.008090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.861 qpair failed and we were unable to recover it. 00:41:42.861 [2024-10-01 22:40:38.008379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.861 [2024-10-01 22:40:38.008386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.861 qpair failed and we were unable to recover it. 00:41:42.861 [2024-10-01 22:40:38.008769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.861 [2024-10-01 22:40:38.008777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.861 qpair failed and we were unable to recover it. 00:41:42.861 [2024-10-01 22:40:38.009088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.861 [2024-10-01 22:40:38.009095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.861 qpair failed and we were unable to recover it. 00:41:42.861 [2024-10-01 22:40:38.009407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.861 [2024-10-01 22:40:38.009414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.861 qpair failed and we were unable to recover it. 00:41:42.861 [2024-10-01 22:40:38.009600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.861 [2024-10-01 22:40:38.009607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.861 qpair failed and we were unable to recover it. 00:41:42.861 [2024-10-01 22:40:38.009857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.861 [2024-10-01 22:40:38.009866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.861 qpair failed and we were unable to recover it. 
00:41:42.861 [2024-10-01 22:40:38.010165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.861 [2024-10-01 22:40:38.010172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.861 qpair failed and we were unable to recover it. 00:41:42.861 [2024-10-01 22:40:38.010375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.861 [2024-10-01 22:40:38.010382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.861 qpair failed and we were unable to recover it. 00:41:42.861 [2024-10-01 22:40:38.010696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.861 [2024-10-01 22:40:38.010704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.861 qpair failed and we were unable to recover it. 00:41:42.861 [2024-10-01 22:40:38.010912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.861 [2024-10-01 22:40:38.010921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.861 qpair failed and we were unable to recover it. 00:41:42.861 [2024-10-01 22:40:38.011075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.861 [2024-10-01 22:40:38.011082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.861 qpair failed and we were unable to recover it. 00:41:42.861 [2024-10-01 22:40:38.011381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.861 [2024-10-01 22:40:38.011388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.861 qpair failed and we were unable to recover it. 00:41:42.861 [2024-10-01 22:40:38.011560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.861 [2024-10-01 22:40:38.011567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.861 qpair failed and we were unable to recover it. 00:41:42.861 [2024-10-01 22:40:38.011970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.861 [2024-10-01 22:40:38.011977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.861 qpair failed and we were unable to recover it. 00:41:42.861 [2024-10-01 22:40:38.012297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.861 [2024-10-01 22:40:38.012305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.861 qpair failed and we were unable to recover it. 00:41:42.861 [2024-10-01 22:40:38.012490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.861 [2024-10-01 22:40:38.012498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.861 qpair failed and we were unable to recover it. 
00:41:42.861 [2024-10-01 22:40:38.012817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.861 [2024-10-01 22:40:38.012825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.861 qpair failed and we were unable to recover it. 00:41:42.861 [2024-10-01 22:40:38.013121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.861 [2024-10-01 22:40:38.013128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.861 qpair failed and we were unable to recover it. 00:41:42.861 [2024-10-01 22:40:38.013464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.861 [2024-10-01 22:40:38.013472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.861 qpair failed and we were unable to recover it. 00:41:42.861 [2024-10-01 22:40:38.013657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.861 [2024-10-01 22:40:38.013665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.861 qpair failed and we were unable to recover it. 00:41:42.861 [2024-10-01 22:40:38.013989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.861 [2024-10-01 22:40:38.013996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.861 qpair failed and we were unable to recover it. 00:41:42.861 [2024-10-01 22:40:38.014292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.861 [2024-10-01 22:40:38.014299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.861 qpair failed and we were unable to recover it. 00:41:42.861 [2024-10-01 22:40:38.014588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.861 [2024-10-01 22:40:38.014596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.861 qpair failed and we were unable to recover it. 00:41:42.861 [2024-10-01 22:40:38.014748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.861 [2024-10-01 22:40:38.014756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.861 qpair failed and we were unable to recover it. 00:41:42.861 [2024-10-01 22:40:38.014935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.861 [2024-10-01 22:40:38.014941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.861 qpair failed and we were unable to recover it. 00:41:42.861 [2024-10-01 22:40:38.015247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.861 [2024-10-01 22:40:38.015254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.861 qpair failed and we were unable to recover it. 
00:41:42.861 [2024-10-01 22:40:38.015599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.861 [2024-10-01 22:40:38.015607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.861 qpair failed and we were unable to recover it. 00:41:42.861 [2024-10-01 22:40:38.015947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.861 [2024-10-01 22:40:38.015955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.861 qpair failed and we were unable to recover it. 00:41:42.861 [2024-10-01 22:40:38.016240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.861 [2024-10-01 22:40:38.016248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.861 qpair failed and we were unable to recover it. 00:41:42.861 [2024-10-01 22:40:38.016518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.861 [2024-10-01 22:40:38.016526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.861 qpair failed and we were unable to recover it. 00:41:42.861 [2024-10-01 22:40:38.016842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.861 [2024-10-01 22:40:38.016851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.861 qpair failed and we were unable to recover it. 00:41:42.861 [2024-10-01 22:40:38.017123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.861 [2024-10-01 22:40:38.017131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.861 qpair failed and we were unable to recover it. 00:41:42.861 [2024-10-01 22:40:38.017336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.861 [2024-10-01 22:40:38.017344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.861 qpair failed and we were unable to recover it. 00:41:42.861 [2024-10-01 22:40:38.017634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.861 [2024-10-01 22:40:38.017642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.861 qpair failed and we were unable to recover it. 00:41:42.862 [2024-10-01 22:40:38.017798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.862 [2024-10-01 22:40:38.017805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.862 qpair failed and we were unable to recover it. 00:41:42.862 [2024-10-01 22:40:38.018080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.862 [2024-10-01 22:40:38.018087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.862 qpair failed and we were unable to recover it. 
00:41:42.862 [2024-10-01 22:40:38.018432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.862 [2024-10-01 22:40:38.018439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.862 qpair failed and we were unable to recover it. 00:41:42.862 [2024-10-01 22:40:38.018746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.862 [2024-10-01 22:40:38.018753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.862 qpair failed and we were unable to recover it. 00:41:42.862 [2024-10-01 22:40:38.018929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.862 [2024-10-01 22:40:38.018936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.862 qpair failed and we were unable to recover it. 00:41:42.862 [2024-10-01 22:40:38.019151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.862 [2024-10-01 22:40:38.019158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.862 qpair failed and we were unable to recover it. 00:41:42.862 [2024-10-01 22:40:38.019199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.862 [2024-10-01 22:40:38.019206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.862 qpair failed and we were unable to recover it. 00:41:42.862 [2024-10-01 22:40:38.019308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.862 [2024-10-01 22:40:38.019315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.862 qpair failed and we were unable to recover it. 00:41:42.862 [2024-10-01 22:40:38.019476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.862 [2024-10-01 22:40:38.019483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.862 qpair failed and we were unable to recover it. 00:41:42.862 [2024-10-01 22:40:38.019868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.862 [2024-10-01 22:40:38.019876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.862 qpair failed and we were unable to recover it. 00:41:42.862 [2024-10-01 22:40:38.020185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.862 [2024-10-01 22:40:38.020191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.862 qpair failed and we were unable to recover it. 00:41:42.862 [2024-10-01 22:40:38.020486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.862 [2024-10-01 22:40:38.020494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.862 qpair failed and we were unable to recover it. 
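Errno 111 on Linux is ECONNREFUSED: nothing is accepting TCP connections on 10.0.0.2:4420, so every connect() attempt from the initiator is rejected and the NVMe/TCP qpair cannot be established, which is consistent with the target side of this disconnect test not being up (or not yet listening) during this phase. A quick way to confirm the errno mapping, as an aside and not part of the test run:

    $ python3 -c 'import errno, os; print(errno.errorcode[111], "=", os.strerror(111))'
    ECONNREFUSED = Connection refused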
[... connect()/qpair failures continue; interleaved with them, the script's xtrace shows target setup starting ...]
00:41:42.862 22:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:41:42.862 22:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
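The trap registered by nvmf/common.sh makes the shared-memory dump (process_shm) and the test teardown (nvmftestfini) run whether the script exits normally, fails, or is interrupted; the `|| :` keeps a failed dump from aborting the rest of the cleanup chain. The same pattern in isolation, as a minimal runnable sketch (the function names and bodies below are placeholders, not the harness's real helpers):

    #!/usr/bin/env bash
    # Placeholder stand-ins for process_shm / nvmftestfini:
    collect_diagnostics() { echo "dump shared memory for post-mortem"; }
    teardown_target()     { echo "stop target app and clean up test state"; }
    cleanup() {
        collect_diagnostics || :   # '|| :' prevents a failed dump from masking teardown
        teardown_target
    }
    trap cleanup SIGINT SIGTERM EXIT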
00:41:42.862 22:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:41:42.862 22:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect()/qpair-failure repetitions elided ...]
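rpc_cmd is the harness's wrapper around SPDK's JSON-RPC client; the equivalent direct invocation would be along these lines (a sketch, with the path assumed relative to an SPDK checkout and a running target app):

    # Create a 64 MB RAM-backed block device with a 512-byte block size, named Malloc0
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0

On success the RPC prints the name of the created bdev, which is the bare `Malloc0` that surfaces in the log just below.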
00:41:42.864 Malloc0
[... connect()/qpair-failure repetitions elided ...]
00:41:42.864 22:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:41:42.864 22:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:41:42.864 22:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:41:42.864 22:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect()/qpair-failure records interleaved with the above xtrace lines elided ...]
00:41:42.864 [2024-10-01 22:40:38.041805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.864 [2024-10-01 22:40:38.041812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.864 qpair failed and we were unable to recover it. 00:41:42.864 [2024-10-01 22:40:38.041969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.864 [2024-10-01 22:40:38.041976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.864 qpair failed and we were unable to recover it. 00:41:42.864 [2024-10-01 22:40:38.042181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.864 [2024-10-01 22:40:38.042187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.864 qpair failed and we were unable to recover it. 00:41:42.864 [2024-10-01 22:40:38.042450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.864 [2024-10-01 22:40:38.042457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.864 qpair failed and we were unable to recover it. 00:41:42.864 [2024-10-01 22:40:38.042733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.864 [2024-10-01 22:40:38.042741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.864 qpair failed and we were unable to recover it. 00:41:42.864 [2024-10-01 22:40:38.043095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.864 [2024-10-01 22:40:38.043102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.864 qpair failed and we were unable to recover it. 00:41:42.864 [2024-10-01 22:40:38.043463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.864 [2024-10-01 22:40:38.043471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.864 qpair failed and we were unable to recover it. 00:41:42.864 [2024-10-01 22:40:38.043662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.864 [2024-10-01 22:40:38.043669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.864 qpair failed and we were unable to recover it. 00:41:42.864 [2024-10-01 22:40:38.043946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.864 [2024-10-01 22:40:38.043952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.864 qpair failed and we were unable to recover it. 00:41:42.864 [2024-10-01 22:40:38.044263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.864 [2024-10-01 22:40:38.044270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.864 qpair failed and we were unable to recover it. 
00:41:42.864 [2024-10-01 22:40:38.044581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.864 [2024-10-01 22:40:38.044587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.864 qpair failed and we were unable to recover it. 00:41:42.864 [2024-10-01 22:40:38.044894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.864 [2024-10-01 22:40:38.044902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.864 qpair failed and we were unable to recover it. 00:41:42.864 [2024-10-01 22:40:38.045220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.864 [2024-10-01 22:40:38.045227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.864 qpair failed and we were unable to recover it. 00:41:42.864 [2024-10-01 22:40:38.045536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.864 [2024-10-01 22:40:38.045543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.864 qpair failed and we were unable to recover it. 00:41:42.864 [2024-10-01 22:40:38.045773] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:42.864 [2024-10-01 22:40:38.045841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.864 [2024-10-01 22:40:38.045848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.864 qpair failed and we were unable to recover it. 00:41:42.864 [2024-10-01 22:40:38.046148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.864 [2024-10-01 22:40:38.046155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.864 qpair failed and we were unable to recover it. 00:41:42.864 [2024-10-01 22:40:38.046469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.864 [2024-10-01 22:40:38.046476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.864 qpair failed and we were unable to recover it. 00:41:42.864 [2024-10-01 22:40:38.046668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.864 [2024-10-01 22:40:38.046675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.864 qpair failed and we were unable to recover it. 00:41:42.864 [2024-10-01 22:40:38.046988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.864 [2024-10-01 22:40:38.046996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.864 qpair failed and we were unable to recover it. 
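Interleaved with the retry noise, the target reports "*** TCP Transport Init ***", confirming the rpc_cmd nvmf_create_transport call issued earlier took effect. A hedged standalone equivalent, assuming rpc_cmd in the autotest harness wraps SPDK's scripts/rpc.py against the default RPC socket (the -o flag is copied verbatim from the log, not interpreted):

  # Create the TCP transport on a running nvmf_tgt (assumed rpc.py wrapper path)
  ./scripts/rpc.py nvmf_create_transport -t tcp -o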
00:41:42.864 [2024-10-01 22:40:38.047273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.865 [2024-10-01 22:40:38.047280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.865 qpair failed and we were unable to recover it. 00:41:42.865 [2024-10-01 22:40:38.047476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.865 [2024-10-01 22:40:38.047483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.865 qpair failed and we were unable to recover it. 00:41:42.865 [2024-10-01 22:40:38.047674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.865 [2024-10-01 22:40:38.047682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.865 qpair failed and we were unable to recover it. 00:41:42.865 [2024-10-01 22:40:38.047885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.865 [2024-10-01 22:40:38.047892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.865 qpair failed and we were unable to recover it. 00:41:42.865 [2024-10-01 22:40:38.048263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.865 [2024-10-01 22:40:38.048269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.865 qpair failed and we were unable to recover it. 00:41:42.865 [2024-10-01 22:40:38.048597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.865 [2024-10-01 22:40:38.048603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.865 qpair failed and we were unable to recover it. 00:41:42.865 [2024-10-01 22:40:38.048903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.865 [2024-10-01 22:40:38.048919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.865 qpair failed and we were unable to recover it. 00:41:42.865 [2024-10-01 22:40:38.049189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.865 [2024-10-01 22:40:38.049196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.865 qpair failed and we were unable to recover it. 00:41:42.865 [2024-10-01 22:40:38.049358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.865 [2024-10-01 22:40:38.049365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.865 qpair failed and we were unable to recover it. 00:41:42.865 [2024-10-01 22:40:38.049530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.865 [2024-10-01 22:40:38.049537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.865 qpair failed and we were unable to recover it. 
00:41:42.865 [2024-10-01 22:40:38.049627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.865 [2024-10-01 22:40:38.049635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.865 qpair failed and we were unable to recover it. 00:41:42.865 [2024-10-01 22:40:38.049838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.865 [2024-10-01 22:40:38.049845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.865 qpair failed and we were unable to recover it. 00:41:42.865 [2024-10-01 22:40:38.050020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.865 [2024-10-01 22:40:38.050027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.865 qpair failed and we were unable to recover it. 00:41:42.865 [2024-10-01 22:40:38.050211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.865 [2024-10-01 22:40:38.050218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.865 qpair failed and we were unable to recover it. 00:41:42.865 [2024-10-01 22:40:38.050413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.865 [2024-10-01 22:40:38.050419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.865 qpair failed and we were unable to recover it. 00:41:42.865 [2024-10-01 22:40:38.050583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.865 [2024-10-01 22:40:38.050590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.865 qpair failed and we were unable to recover it. 00:41:42.865 [2024-10-01 22:40:38.050906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.865 [2024-10-01 22:40:38.050913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.865 qpair failed and we were unable to recover it. 00:41:42.865 [2024-10-01 22:40:38.051230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.865 [2024-10-01 22:40:38.051237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.865 qpair failed and we were unable to recover it. 00:41:42.865 [2024-10-01 22:40:38.051418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.865 [2024-10-01 22:40:38.051426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.865 qpair failed and we were unable to recover it. 00:41:42.865 [2024-10-01 22:40:38.051722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.865 [2024-10-01 22:40:38.051730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.865 qpair failed and we were unable to recover it. 
00:41:42.865 [2024-10-01 22:40:38.052062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.865 [2024-10-01 22:40:38.052069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.865 qpair failed and we were unable to recover it. 00:41:42.865 [2024-10-01 22:40:38.052374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.865 [2024-10-01 22:40:38.052381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.865 qpair failed and we were unable to recover it. 00:41:42.865 [2024-10-01 22:40:38.052576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.865 [2024-10-01 22:40:38.052582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.865 qpair failed and we were unable to recover it. 00:41:42.865 [2024-10-01 22:40:38.052866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.865 [2024-10-01 22:40:38.052873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.865 qpair failed and we were unable to recover it. 00:41:42.865 [2024-10-01 22:40:38.053189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.865 [2024-10-01 22:40:38.053195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.865 qpair failed and we were unable to recover it. 00:41:42.865 [2024-10-01 22:40:38.053495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.865 [2024-10-01 22:40:38.053504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.865 qpair failed and we were unable to recover it. 00:41:42.865 [2024-10-01 22:40:38.053834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.865 [2024-10-01 22:40:38.053842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.865 qpair failed and we were unable to recover it. 00:41:42.865 [2024-10-01 22:40:38.053916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.865 [2024-10-01 22:40:38.053923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.865 qpair failed and we were unable to recover it. 00:41:42.865 [2024-10-01 22:40:38.054065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.865 [2024-10-01 22:40:38.054072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.865 qpair failed and we were unable to recover it. 00:41:42.865 [2024-10-01 22:40:38.054413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.865 [2024-10-01 22:40:38.054420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.865 qpair failed and we were unable to recover it. 
00:41:42.865 [2024-10-01 22:40:38.054609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.865 [2024-10-01 22:40:38.054617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.865 qpair failed and we were unable to recover it. 00:41:42.865 [2024-10-01 22:40:38.054919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.865 [2024-10-01 22:40:38.054928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.865 qpair failed and we were unable to recover it. 00:41:42.865 22:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:42.865 [2024-10-01 22:40:38.055212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.865 [2024-10-01 22:40:38.055220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.865 qpair failed and we were unable to recover it. 00:41:42.865 22:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:42.865 [2024-10-01 22:40:38.055492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.865 [2024-10-01 22:40:38.055500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.865 qpair failed and we were unable to recover it. 00:41:42.865 22:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:42.865 [2024-10-01 22:40:38.055842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.865 [2024-10-01 22:40:38.055851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.865 qpair failed and we were unable to recover it. 00:41:42.865 22:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:41:42.865 [2024-10-01 22:40:38.056135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.865 [2024-10-01 22:40:38.056142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.865 qpair failed and we were unable to recover it. 00:41:42.865 [2024-10-01 22:40:38.056451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.865 [2024-10-01 22:40:38.056458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.865 qpair failed and we were unable to recover it. 00:41:42.865 [2024-10-01 22:40:38.056776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.866 [2024-10-01 22:40:38.056784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.866 qpair failed and we were unable to recover it. 
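The next step visible through the interleaving is subsystem creation. A hedged sketch of the same call via rpc.py, with the flags exactly as they appear in the log (-a allows any host NQN to connect; -s sets the serial number the controller reports):

  # Create NVMe-oF subsystem cnode1 (assumed rpc.py wrapper path)
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001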
00:41:42.866 [2024-10-01 22:40:38.057074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.866 [2024-10-01 22:40:38.057081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.866 qpair failed and we were unable to recover it. 00:41:42.866 [2024-10-01 22:40:38.057362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.866 [2024-10-01 22:40:38.057370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.866 qpair failed and we were unable to recover it. 00:41:42.866 [2024-10-01 22:40:38.057689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.866 [2024-10-01 22:40:38.057696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.866 qpair failed and we were unable to recover it. 00:41:42.866 [2024-10-01 22:40:38.057992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.866 [2024-10-01 22:40:38.057999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.866 qpair failed and we were unable to recover it. 00:41:42.866 [2024-10-01 22:40:38.058214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.866 [2024-10-01 22:40:38.058222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.866 qpair failed and we were unable to recover it. 00:41:42.866 [2024-10-01 22:40:38.058526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.866 [2024-10-01 22:40:38.058532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.866 qpair failed and we were unable to recover it. 00:41:42.866 [2024-10-01 22:40:38.058717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.866 [2024-10-01 22:40:38.058724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.866 qpair failed and we were unable to recover it. 00:41:42.866 [2024-10-01 22:40:38.059101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.866 [2024-10-01 22:40:38.059108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.866 qpair failed and we were unable to recover it. 00:41:42.866 [2024-10-01 22:40:38.059427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.866 [2024-10-01 22:40:38.059436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.866 qpair failed and we were unable to recover it. 00:41:42.866 [2024-10-01 22:40:38.059632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.866 [2024-10-01 22:40:38.059639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.866 qpair failed and we were unable to recover it. 
00:41:42.866 [2024-10-01 22:40:38.059830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.866 [2024-10-01 22:40:38.059837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.866 qpair failed and we were unable to recover it. 00:41:42.866 [2024-10-01 22:40:38.060149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.866 [2024-10-01 22:40:38.060156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.866 qpair failed and we were unable to recover it. 00:41:42.866 [2024-10-01 22:40:38.060485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.866 [2024-10-01 22:40:38.060492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.866 qpair failed and we were unable to recover it. 00:41:42.866 [2024-10-01 22:40:38.060840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.866 [2024-10-01 22:40:38.060847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.866 qpair failed and we were unable to recover it. 00:41:42.866 [2024-10-01 22:40:38.061157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.866 [2024-10-01 22:40:38.061164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.866 qpair failed and we were unable to recover it. 00:41:42.866 [2024-10-01 22:40:38.061514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.866 [2024-10-01 22:40:38.061522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.866 qpair failed and we were unable to recover it. 00:41:42.866 [2024-10-01 22:40:38.061850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.866 [2024-10-01 22:40:38.061857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.866 qpair failed and we were unable to recover it. 00:41:42.866 [2024-10-01 22:40:38.062143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.866 [2024-10-01 22:40:38.062150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.866 qpair failed and we were unable to recover it. 00:41:42.866 [2024-10-01 22:40:38.062436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.866 [2024-10-01 22:40:38.062444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.866 qpair failed and we were unable to recover it. 00:41:42.866 [2024-10-01 22:40:38.062730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.866 [2024-10-01 22:40:38.062737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.866 qpair failed and we were unable to recover it. 
00:41:42.866 [2024-10-01 22:40:38.063056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.866 [2024-10-01 22:40:38.063063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.866 qpair failed and we were unable to recover it. 00:41:42.866 [2024-10-01 22:40:38.063261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.866 [2024-10-01 22:40:38.063267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.866 qpair failed and we were unable to recover it. 00:41:42.866 [2024-10-01 22:40:38.063560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.866 [2024-10-01 22:40:38.063567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.866 qpair failed and we were unable to recover it. 00:41:42.866 [2024-10-01 22:40:38.063757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.866 [2024-10-01 22:40:38.063764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.866 qpair failed and we were unable to recover it. 00:41:42.866 [2024-10-01 22:40:38.064097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.866 [2024-10-01 22:40:38.064104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.866 qpair failed and we were unable to recover it. 00:41:42.866 [2024-10-01 22:40:38.064388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.866 [2024-10-01 22:40:38.064396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.866 qpair failed and we were unable to recover it. 00:41:42.866 [2024-10-01 22:40:38.064551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.866 [2024-10-01 22:40:38.064557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.866 qpair failed and we were unable to recover it. 00:41:42.866 [2024-10-01 22:40:38.064737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.866 [2024-10-01 22:40:38.064744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.866 qpair failed and we were unable to recover it. 00:41:42.866 [2024-10-01 22:40:38.065035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.866 [2024-10-01 22:40:38.065042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.866 qpair failed and we were unable to recover it. 00:41:42.866 [2024-10-01 22:40:38.065317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.866 [2024-10-01 22:40:38.065325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.866 qpair failed and we were unable to recover it. 
00:41:42.866 [2024-10-01 22:40:38.065614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.866 [2024-10-01 22:40:38.065631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.866 qpair failed and we were unable to recover it. 00:41:42.866 [2024-10-01 22:40:38.065834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.866 [2024-10-01 22:40:38.065841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.866 qpair failed and we were unable to recover it. 00:41:42.866 [2024-10-01 22:40:38.066118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.866 [2024-10-01 22:40:38.066124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.866 qpair failed and we were unable to recover it. 00:41:42.866 [2024-10-01 22:40:38.066307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.866 [2024-10-01 22:40:38.066314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.866 qpair failed and we were unable to recover it. 00:41:42.866 [2024-10-01 22:40:38.066661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.866 [2024-10-01 22:40:38.066668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.866 qpair failed and we were unable to recover it. 00:41:42.866 22:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:42.866 [2024-10-01 22:40:38.066973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.866 [2024-10-01 22:40:38.066981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:42.866 qpair failed and we were unable to recover it. 00:41:43.130 22:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:43.130 [2024-10-01 22:40:38.067288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.130 [2024-10-01 22:40:38.067296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.130 qpair failed and we were unable to recover it. 00:41:43.130 22:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:43.130 [2024-10-01 22:40:38.067577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.130 [2024-10-01 22:40:38.067586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.130 qpair failed and we were unable to recover it. 
00:41:43.130 22:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:41:43.130 [2024-10-01 22:40:38.067810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.130 [2024-10-01 22:40:38.067817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.130 qpair failed and we were unable to recover it. 00:41:43.130 [2024-10-01 22:40:38.068125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.130 [2024-10-01 22:40:38.068132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.130 qpair failed and we were unable to recover it. 00:41:43.130 [2024-10-01 22:40:38.068448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.130 [2024-10-01 22:40:38.068455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.130 qpair failed and we were unable to recover it. 00:41:43.130 [2024-10-01 22:40:38.068781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.130 [2024-10-01 22:40:38.068788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.130 qpair failed and we were unable to recover it. 00:41:43.130 [2024-10-01 22:40:38.069106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.130 [2024-10-01 22:40:38.069112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.130 qpair failed and we were unable to recover it. 00:41:43.130 [2024-10-01 22:40:38.069329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.130 [2024-10-01 22:40:38.069335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.130 qpair failed and we were unable to recover it. 00:41:43.130 [2024-10-01 22:40:38.069537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.130 [2024-10-01 22:40:38.069544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.130 qpair failed and we were unable to recover it. 00:41:43.130 [2024-10-01 22:40:38.069847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.130 [2024-10-01 22:40:38.069856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.130 qpair failed and we were unable to recover it. 00:41:43.130 [2024-10-01 22:40:38.070169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.130 [2024-10-01 22:40:38.070176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.130 qpair failed and we were unable to recover it. 
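The nvmf_subsystem_add_ns step above attaches the Malloc0 bdev to cnode1 as namespace 1. Malloc0 is created earlier in the test, outside this excerpt, so the creation call below is illustrative only (64 MiB with 512-byte blocks are assumed sizes, not taken from this log):

  # Hypothetical bdev creation, then the namespace attach seen in the log
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0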
00:41:43.130 [2024-10-01 22:40:38.070492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.130 [2024-10-01 22:40:38.070498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.130 qpair failed and we were unable to recover it. 00:41:43.130 [2024-10-01 22:40:38.070819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.130 [2024-10-01 22:40:38.070826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.130 qpair failed and we were unable to recover it. 00:41:43.130 [2024-10-01 22:40:38.071127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.130 [2024-10-01 22:40:38.071134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.130 qpair failed and we were unable to recover it. 00:41:43.130 [2024-10-01 22:40:38.071445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.130 [2024-10-01 22:40:38.071454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.130 qpair failed and we were unable to recover it. 00:41:43.130 [2024-10-01 22:40:38.071616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.130 [2024-10-01 22:40:38.071623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.130 qpair failed and we were unable to recover it. 00:41:43.130 [2024-10-01 22:40:38.071980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.130 [2024-10-01 22:40:38.071987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.130 qpair failed and we were unable to recover it. 00:41:43.130 [2024-10-01 22:40:38.072305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.130 [2024-10-01 22:40:38.072312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.130 qpair failed and we were unable to recover it. 00:41:43.130 [2024-10-01 22:40:38.072641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.130 [2024-10-01 22:40:38.072649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.130 qpair failed and we were unable to recover it. 00:41:43.130 [2024-10-01 22:40:38.072924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.130 [2024-10-01 22:40:38.072930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.130 qpair failed and we were unable to recover it. 00:41:43.130 [2024-10-01 22:40:38.073123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.131 [2024-10-01 22:40:38.073130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.131 qpair failed and we were unable to recover it. 
00:41:43.131 [2024-10-01 22:40:38.073439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.131 [2024-10-01 22:40:38.073446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.131 qpair failed and we were unable to recover it. 00:41:43.131 [2024-10-01 22:40:38.073769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.131 [2024-10-01 22:40:38.073777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.131 qpair failed and we were unable to recover it. 00:41:43.131 [2024-10-01 22:40:38.074078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.131 [2024-10-01 22:40:38.074094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.131 qpair failed and we were unable to recover it. 00:41:43.131 [2024-10-01 22:40:38.074407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.131 [2024-10-01 22:40:38.074414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.131 qpair failed and we were unable to recover it. 00:41:43.131 [2024-10-01 22:40:38.074687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.131 [2024-10-01 22:40:38.074694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.131 qpair failed and we were unable to recover it. 00:41:43.131 [2024-10-01 22:40:38.074999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.131 [2024-10-01 22:40:38.075006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.131 qpair failed and we were unable to recover it. 00:41:43.131 [2024-10-01 22:40:38.075295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.131 [2024-10-01 22:40:38.075302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.131 qpair failed and we were unable to recover it. 00:41:43.131 [2024-10-01 22:40:38.075615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.131 [2024-10-01 22:40:38.075622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.131 qpair failed and we were unable to recover it. 00:41:43.131 [2024-10-01 22:40:38.075810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.131 [2024-10-01 22:40:38.075817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.131 qpair failed and we were unable to recover it. 00:41:43.131 [2024-10-01 22:40:38.076154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.131 [2024-10-01 22:40:38.076161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.131 qpair failed and we were unable to recover it. 
00:41:43.131 [2024-10-01 22:40:38.076499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.131 [2024-10-01 22:40:38.076507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.131 qpair failed and we were unable to recover it. 00:41:43.131 [2024-10-01 22:40:38.076686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.131 [2024-10-01 22:40:38.076693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.131 qpair failed and we were unable to recover it. 00:41:43.131 [2024-10-01 22:40:38.076978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.131 [2024-10-01 22:40:38.076985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.131 qpair failed and we were unable to recover it. 00:41:43.131 [2024-10-01 22:40:38.077187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.131 [2024-10-01 22:40:38.077194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.131 qpair failed and we were unable to recover it. 00:41:43.131 [2024-10-01 22:40:38.077389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.131 [2024-10-01 22:40:38.077396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.131 qpair failed and we were unable to recover it. 00:41:43.131 [2024-10-01 22:40:38.077634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.131 [2024-10-01 22:40:38.077641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.131 qpair failed and we were unable to recover it. 00:41:43.131 [2024-10-01 22:40:38.077968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.131 [2024-10-01 22:40:38.077975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.131 qpair failed and we were unable to recover it. 00:41:43.131 [2024-10-01 22:40:38.078325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.131 [2024-10-01 22:40:38.078333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.131 qpair failed and we were unable to recover it. 00:41:43.131 [2024-10-01 22:40:38.078548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.131 [2024-10-01 22:40:38.078556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.131 qpair failed and we were unable to recover it. 00:41:43.131 [2024-10-01 22:40:38.078770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.131 [2024-10-01 22:40:38.078777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.131 qpair failed and we were unable to recover it. 
00:41:43.131 [2024-10-01 22:40:38.078977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.131 [2024-10-01 22:40:38.078984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.131 qpair failed and we were unable to recover it. 00:41:43.131 22:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:43.131 [2024-10-01 22:40:38.079155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.131 [2024-10-01 22:40:38.079162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.131 qpair failed and we were unable to recover it. 00:41:43.131 [2024-10-01 22:40:38.079318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.131 [2024-10-01 22:40:38.079326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.131 qpair failed and we were unable to recover it. 00:41:43.131 22:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:43.131 [2024-10-01 22:40:38.079627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.131 [2024-10-01 22:40:38.079635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.131 qpair failed and we were unable to recover it. 00:41:43.131 22:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:43.131 [2024-10-01 22:40:38.079835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.131 [2024-10-01 22:40:38.079842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.131 qpair failed and we were unable to recover it. 00:41:43.131 22:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:41:43.131 [2024-10-01 22:40:38.080000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.131 [2024-10-01 22:40:38.080007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.131 qpair failed and we were unable to recover it. 00:41:43.131 [2024-10-01 22:40:38.080294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.131 [2024-10-01 22:40:38.080302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.131 qpair failed and we were unable to recover it. 00:41:43.131 [2024-10-01 22:40:38.080570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.131 [2024-10-01 22:40:38.080577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.131 qpair failed and we were unable to recover it. 
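Here the data listener for cnode1 is added; a matching discovery listener on the same address follows a few lines below. A hedged rpc.py sketch of both calls, flags verbatim from the log:

  # Listeners on 10.0.0.2:4420 (assumed rpc.py wrapper path)
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420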
00:41:43.131 [2024-10-01 22:40:38.080889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.131 [2024-10-01 22:40:38.080897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.131 qpair failed and we were unable to recover it. 00:41:43.131 [2024-10-01 22:40:38.081213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.131 [2024-10-01 22:40:38.081220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.131 qpair failed and we were unable to recover it. 00:41:43.131 [2024-10-01 22:40:38.081518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.131 [2024-10-01 22:40:38.081525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.131 qpair failed and we were unable to recover it. 00:41:43.131 [2024-10-01 22:40:38.081710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.131 [2024-10-01 22:40:38.081719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.131 qpair failed and we were unable to recover it. 00:41:43.131 [2024-10-01 22:40:38.082039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.131 [2024-10-01 22:40:38.082045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.131 qpair failed and we were unable to recover it. 00:41:43.131 [2024-10-01 22:40:38.082243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.131 [2024-10-01 22:40:38.082250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.131 qpair failed and we were unable to recover it. 00:41:43.131 [2024-10-01 22:40:38.082576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.131 [2024-10-01 22:40:38.082583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.131 qpair failed and we were unable to recover it. 00:41:43.131 [2024-10-01 22:40:38.082870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.131 [2024-10-01 22:40:38.082878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.132 qpair failed and we were unable to recover it. 00:41:43.132 [2024-10-01 22:40:38.083191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.132 [2024-10-01 22:40:38.083198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.132 qpair failed and we were unable to recover it. 00:41:43.132 [2024-10-01 22:40:38.083372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.132 [2024-10-01 22:40:38.083379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.132 qpair failed and we were unable to recover it. 
00:41:43.132 [2024-10-01 22:40:38.083647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.132 [2024-10-01 22:40:38.083654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.132 qpair failed and we were unable to recover it. 00:41:43.132 [2024-10-01 22:40:38.083851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.132 [2024-10-01 22:40:38.083857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.132 qpair failed and we were unable to recover it. 00:41:43.132 [2024-10-01 22:40:38.084044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.132 [2024-10-01 22:40:38.084050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.132 qpair failed and we were unable to recover it. 00:41:43.132 [2024-10-01 22:40:38.084360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.132 [2024-10-01 22:40:38.084367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.132 qpair failed and we were unable to recover it. 00:41:43.132 [2024-10-01 22:40:38.084654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.132 [2024-10-01 22:40:38.084662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.132 qpair failed and we were unable to recover it. 00:41:43.132 [2024-10-01 22:40:38.084974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.132 [2024-10-01 22:40:38.084982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.132 qpair failed and we were unable to recover it. 00:41:43.132 [2024-10-01 22:40:38.085182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.132 [2024-10-01 22:40:38.085189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.132 qpair failed and we were unable to recover it. 00:41:43.132 [2024-10-01 22:40:38.085522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.132 [2024-10-01 22:40:38.085528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.132 qpair failed and we were unable to recover it. 00:41:43.132 [2024-10-01 22:40:38.085855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.132 [2024-10-01 22:40:38.085862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa7fc000b90 with addr=10.0.0.2, port=4420 00:41:43.132 qpair failed and we were unable to recover it. 
00:41:43.132 [2024-10-01 22:40:38.086023] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:43.132 22:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:43.132 22:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:43.132 22:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:43.132 22:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:41:43.132 [2024-10-01 22:40:38.096708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:43.132 [2024-10-01 22:40:38.096782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:43.132 [2024-10-01 22:40:38.096796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:43.132 [2024-10-01 22:40:38.096802] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:43.132 [2024-10-01 22:40:38.096806] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:43.132 [2024-10-01 22:40:38.096820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:43.132 qpair failed and we were unable to recover it. 00:41:43.132 22:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:43.132 22:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 399496 00:41:43.132 [2024-10-01 22:40:38.106612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:43.132 [2024-10-01 22:40:38.106698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:43.132 [2024-10-01 22:40:38.106710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:43.132 [2024-10-01 22:40:38.106715] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:43.132 [2024-10-01 22:40:38.106720] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:43.132 [2024-10-01 22:40:38.106730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:43.132 qpair failed and we were unable to recover it. 
00:41:43.132 [2024-10-01 22:40:38.116613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:43.132 [2024-10-01 22:40:38.116663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:43.132 [2024-10-01 22:40:38.116674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:43.132 [2024-10-01 22:40:38.116679] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:43.132 [2024-10-01 22:40:38.116686] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:43.132 [2024-10-01 22:40:38.116697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:43.132 qpair failed and we were unable to recover it.
00:41:43.132 [... this identical seven-line block ("Unknown controller ID 0x1" through "qpair failed and we were unable to recover it.") repeats roughly every 10 ms, about 60 more times, from 22:40:38.126 through 22:40:38.738 ...]
00:41:43.660 [2024-10-01 22:40:38.748196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:43.660 [2024-10-01 22:40:38.748238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:43.660 [2024-10-01 22:40:38.748248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:43.660 [2024-10-01 22:40:38.748253] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:43.660 [2024-10-01 22:40:38.748257] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:43.660 [2024-10-01 22:40:38.748267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:43.660 qpair failed and we were unable to recover it. 00:41:43.660 [2024-10-01 22:40:38.758213] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:43.660 [2024-10-01 22:40:38.758256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:43.660 [2024-10-01 22:40:38.758265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:43.660 [2024-10-01 22:40:38.758270] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:43.660 [2024-10-01 22:40:38.758274] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:43.660 [2024-10-01 22:40:38.758284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:43.660 qpair failed and we were unable to recover it. 00:41:43.660 [2024-10-01 22:40:38.768212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:43.660 [2024-10-01 22:40:38.768260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:43.660 [2024-10-01 22:40:38.768270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:43.660 [2024-10-01 22:40:38.768275] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:43.660 [2024-10-01 22:40:38.768279] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:43.660 [2024-10-01 22:40:38.768289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:43.660 qpair failed and we were unable to recover it. 
00:41:43.660 [2024-10-01 22:40:38.778286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:43.660 [2024-10-01 22:40:38.778331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:43.660 [2024-10-01 22:40:38.778341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:43.660 [2024-10-01 22:40:38.778346] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:43.660 [2024-10-01 22:40:38.778350] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:43.660 [2024-10-01 22:40:38.778360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:43.660 qpair failed and we were unable to recover it. 00:41:43.660 [2024-10-01 22:40:38.788311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:43.660 [2024-10-01 22:40:38.788365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:43.660 [2024-10-01 22:40:38.788384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:43.660 [2024-10-01 22:40:38.788393] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:43.660 [2024-10-01 22:40:38.788398] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:43.661 [2024-10-01 22:40:38.788413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:43.661 qpair failed and we were unable to recover it. 00:41:43.661 [2024-10-01 22:40:38.798337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:43.661 [2024-10-01 22:40:38.798384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:43.661 [2024-10-01 22:40:38.798395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:43.661 [2024-10-01 22:40:38.798400] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:43.661 [2024-10-01 22:40:38.798404] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:43.661 [2024-10-01 22:40:38.798415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:43.661 qpair failed and we were unable to recover it. 
00:41:43.661 [2024-10-01 22:40:38.808359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:43.661 [2024-10-01 22:40:38.808451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:43.661 [2024-10-01 22:40:38.808470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:43.661 [2024-10-01 22:40:38.808476] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:43.661 [2024-10-01 22:40:38.808481] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:43.661 [2024-10-01 22:40:38.808494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:43.661 qpair failed and we were unable to recover it. 00:41:43.661 [2024-10-01 22:40:38.818411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:43.661 [2024-10-01 22:40:38.818460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:43.661 [2024-10-01 22:40:38.818478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:43.661 [2024-10-01 22:40:38.818484] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:43.661 [2024-10-01 22:40:38.818489] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:43.661 [2024-10-01 22:40:38.818503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:43.661 qpair failed and we were unable to recover it. 00:41:43.661 [2024-10-01 22:40:38.828421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:43.661 [2024-10-01 22:40:38.828465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:43.661 [2024-10-01 22:40:38.828476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:43.661 [2024-10-01 22:40:38.828481] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:43.661 [2024-10-01 22:40:38.828485] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:43.661 [2024-10-01 22:40:38.828497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:43.661 qpair failed and we were unable to recover it. 
00:41:43.661 [2024-10-01 22:40:38.838435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:43.661 [2024-10-01 22:40:38.838487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:43.661 [2024-10-01 22:40:38.838499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:43.661 [2024-10-01 22:40:38.838504] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:43.661 [2024-10-01 22:40:38.838508] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:43.661 [2024-10-01 22:40:38.838520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:43.661 qpair failed and we were unable to recover it. 00:41:43.661 [2024-10-01 22:40:38.848440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:43.661 [2024-10-01 22:40:38.848483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:43.661 [2024-10-01 22:40:38.848494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:43.661 [2024-10-01 22:40:38.848498] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:43.661 [2024-10-01 22:40:38.848503] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:43.661 [2024-10-01 22:40:38.848514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:43.661 qpair failed and we were unable to recover it. 00:41:43.661 [2024-10-01 22:40:38.858515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:43.661 [2024-10-01 22:40:38.858562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:43.661 [2024-10-01 22:40:38.858572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:43.661 [2024-10-01 22:40:38.858577] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:43.661 [2024-10-01 22:40:38.858581] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:43.661 [2024-10-01 22:40:38.858591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:43.661 qpair failed and we were unable to recover it. 
00:41:43.661 [2024-10-01 22:40:38.868519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:43.661 [2024-10-01 22:40:38.868563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:43.661 [2024-10-01 22:40:38.868573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:43.661 [2024-10-01 22:40:38.868578] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:43.661 [2024-10-01 22:40:38.868582] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:43.661 [2024-10-01 22:40:38.868592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:43.661 qpair failed and we were unable to recover it. 00:41:43.661 [2024-10-01 22:40:38.878558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:43.661 [2024-10-01 22:40:38.878599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:43.661 [2024-10-01 22:40:38.878609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:43.661 [2024-10-01 22:40:38.878616] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:43.661 [2024-10-01 22:40:38.878621] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:43.661 [2024-10-01 22:40:38.878634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:43.661 qpair failed and we were unable to recover it. 00:41:43.661 [2024-10-01 22:40:38.888546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:43.661 [2024-10-01 22:40:38.888588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:43.661 [2024-10-01 22:40:38.888597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:43.661 [2024-10-01 22:40:38.888602] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:43.661 [2024-10-01 22:40:38.888606] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:43.661 [2024-10-01 22:40:38.888616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:43.661 qpair failed and we were unable to recover it. 
00:41:43.661 [2024-10-01 22:40:38.898632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:43.661 [2024-10-01 22:40:38.898683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:43.661 [2024-10-01 22:40:38.898693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:43.661 [2024-10-01 22:40:38.898698] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:43.661 [2024-10-01 22:40:38.898702] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:43.661 [2024-10-01 22:40:38.898712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:43.661 qpair failed and we were unable to recover it. 00:41:43.661 [2024-10-01 22:40:38.908661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:43.661 [2024-10-01 22:40:38.908707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:43.661 [2024-10-01 22:40:38.908716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:43.661 [2024-10-01 22:40:38.908721] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:43.661 [2024-10-01 22:40:38.908726] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:43.661 [2024-10-01 22:40:38.908736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:43.661 qpair failed and we were unable to recover it. 00:41:43.925 [2024-10-01 22:40:38.918618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:43.925 [2024-10-01 22:40:38.918663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:43.925 [2024-10-01 22:40:38.918672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:43.925 [2024-10-01 22:40:38.918677] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:43.925 [2024-10-01 22:40:38.918682] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:43.925 [2024-10-01 22:40:38.918692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:43.925 qpair failed and we were unable to recover it. 
00:41:43.925 [2024-10-01 22:40:38.928660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:43.925 [2024-10-01 22:40:38.928702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:43.925 [2024-10-01 22:40:38.928711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:43.925 [2024-10-01 22:40:38.928716] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:43.925 [2024-10-01 22:40:38.928720] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:43.925 [2024-10-01 22:40:38.928730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:43.925 qpair failed and we were unable to recover it. 00:41:43.925 [2024-10-01 22:40:38.938716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:43.925 [2024-10-01 22:40:38.938763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:43.925 [2024-10-01 22:40:38.938772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:43.925 [2024-10-01 22:40:38.938777] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:43.925 [2024-10-01 22:40:38.938781] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:43.925 [2024-10-01 22:40:38.938791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:43.925 qpair failed and we were unable to recover it. 00:41:43.925 [2024-10-01 22:40:38.948721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:43.925 [2024-10-01 22:40:38.948765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:43.925 [2024-10-01 22:40:38.948774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:43.925 [2024-10-01 22:40:38.948779] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:43.925 [2024-10-01 22:40:38.948784] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:43.925 [2024-10-01 22:40:38.948793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:43.925 qpair failed and we were unable to recover it. 
00:41:43.925 [2024-10-01 22:40:38.958627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:43.925 [2024-10-01 22:40:38.958695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:43.925 [2024-10-01 22:40:38.958704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:43.925 [2024-10-01 22:40:38.958709] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:43.925 [2024-10-01 22:40:38.958714] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:43.925 [2024-10-01 22:40:38.958724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:43.925 qpair failed and we were unable to recover it. 00:41:43.925 [2024-10-01 22:40:38.968687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:43.925 [2024-10-01 22:40:38.968728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:43.925 [2024-10-01 22:40:38.968740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:43.925 [2024-10-01 22:40:38.968745] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:43.925 [2024-10-01 22:40:38.968749] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:43.925 [2024-10-01 22:40:38.968759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:43.925 qpair failed and we were unable to recover it. 00:41:43.925 [2024-10-01 22:40:38.978762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:43.925 [2024-10-01 22:40:38.978806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:43.925 [2024-10-01 22:40:38.978815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:43.925 [2024-10-01 22:40:38.978820] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:43.925 [2024-10-01 22:40:38.978824] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:43.925 [2024-10-01 22:40:38.978834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:43.925 qpair failed and we were unable to recover it. 
00:41:43.925 [2024-10-01 22:40:38.988841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:43.925 [2024-10-01 22:40:38.988921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:43.925 [2024-10-01 22:40:38.988931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:43.925 [2024-10-01 22:40:38.988936] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:43.925 [2024-10-01 22:40:38.988940] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:43.925 [2024-10-01 22:40:38.988950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:43.925 qpair failed and we were unable to recover it. 00:41:43.925 [2024-10-01 22:40:38.998836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:43.925 [2024-10-01 22:40:38.998875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:43.925 [2024-10-01 22:40:38.998884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:43.925 [2024-10-01 22:40:38.998889] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:43.925 [2024-10-01 22:40:38.998894] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:43.925 [2024-10-01 22:40:38.998904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:43.925 qpair failed and we were unable to recover it. 00:41:43.925 [2024-10-01 22:40:39.008808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:43.925 [2024-10-01 22:40:39.008853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:43.925 [2024-10-01 22:40:39.008863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:43.925 [2024-10-01 22:40:39.008867] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:43.925 [2024-10-01 22:40:39.008872] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:43.925 [2024-10-01 22:40:39.008884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:43.925 qpair failed and we were unable to recover it. 
00:41:43.925 [2024-10-01 22:40:39.018948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:43.925 [2024-10-01 22:40:39.019026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:43.925 [2024-10-01 22:40:39.019035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:43.925 [2024-10-01 22:40:39.019041] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:43.925 [2024-10-01 22:40:39.019045] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:43.925 [2024-10-01 22:40:39.019055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:43.925 qpair failed and we were unable to recover it. 00:41:43.925 [2024-10-01 22:40:39.028927] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:43.925 [2024-10-01 22:40:39.028967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:43.925 [2024-10-01 22:40:39.028976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:43.925 [2024-10-01 22:40:39.028981] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:43.925 [2024-10-01 22:40:39.028986] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:43.925 [2024-10-01 22:40:39.028995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:43.925 qpair failed and we were unable to recover it. 00:41:43.925 [2024-10-01 22:40:39.038996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:43.925 [2024-10-01 22:40:39.039045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:43.925 [2024-10-01 22:40:39.039054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:43.925 [2024-10-01 22:40:39.039059] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:43.925 [2024-10-01 22:40:39.039064] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:43.925 [2024-10-01 22:40:39.039074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:43.925 qpair failed and we were unable to recover it. 
00:41:43.925 [2024-10-01 22:40:39.048941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:43.925 [2024-10-01 22:40:39.048981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:43.925 [2024-10-01 22:40:39.048991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:43.925 [2024-10-01 22:40:39.048996] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:43.925 [2024-10-01 22:40:39.049000] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:43.925 [2024-10-01 22:40:39.049010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:43.925 qpair failed and we were unable to recover it. 00:41:43.925 [2024-10-01 22:40:39.059051] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:43.925 [2024-10-01 22:40:39.059103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:43.925 [2024-10-01 22:40:39.059115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:43.925 [2024-10-01 22:40:39.059120] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:43.925 [2024-10-01 22:40:39.059125] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:43.925 [2024-10-01 22:40:39.059135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:43.925 qpair failed and we were unable to recover it. 00:41:43.925 [2024-10-01 22:40:39.069066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:43.925 [2024-10-01 22:40:39.069106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:43.925 [2024-10-01 22:40:39.069115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:43.925 [2024-10-01 22:40:39.069120] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:43.925 [2024-10-01 22:40:39.069124] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:43.925 [2024-10-01 22:40:39.069134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:43.925 qpair failed and we were unable to recover it. 
00:41:43.925 [2024-10-01 22:40:39.079107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:43.925 [2024-10-01 22:40:39.079151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:43.925 [2024-10-01 22:40:39.079160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:43.925 [2024-10-01 22:40:39.079165] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:43.926 [2024-10-01 22:40:39.079170] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:43.926 [2024-10-01 22:40:39.079180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:43.926 qpair failed and we were unable to recover it. 00:41:43.926 [2024-10-01 22:40:39.089117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:43.926 [2024-10-01 22:40:39.089193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:43.926 [2024-10-01 22:40:39.089202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:43.926 [2024-10-01 22:40:39.089207] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:43.926 [2024-10-01 22:40:39.089211] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:43.926 [2024-10-01 22:40:39.089222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:43.926 qpair failed and we were unable to recover it. 00:41:43.926 [2024-10-01 22:40:39.099165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:43.926 [2024-10-01 22:40:39.099255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:43.926 [2024-10-01 22:40:39.099265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:43.926 [2024-10-01 22:40:39.099270] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:43.926 [2024-10-01 22:40:39.099274] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:43.926 [2024-10-01 22:40:39.099287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:43.926 qpair failed and we were unable to recover it. 
00:41:43.926 [2024-10-01 22:40:39.109175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:43.926 [2024-10-01 22:40:39.109217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:43.926 [2024-10-01 22:40:39.109226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:43.926 [2024-10-01 22:40:39.109231] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:43.926 [2024-10-01 22:40:39.109235] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:43.926 [2024-10-01 22:40:39.109245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:43.926 qpair failed and we were unable to recover it. 00:41:43.926 [2024-10-01 22:40:39.119207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:43.926 [2024-10-01 22:40:39.119254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:43.926 [2024-10-01 22:40:39.119263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:43.926 [2024-10-01 22:40:39.119268] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:43.926 [2024-10-01 22:40:39.119273] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:43.926 [2024-10-01 22:40:39.119283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:43.926 qpair failed and we were unable to recover it. 00:41:43.926 [2024-10-01 22:40:39.129216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:43.926 [2024-10-01 22:40:39.129266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:43.926 [2024-10-01 22:40:39.129276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:43.926 [2024-10-01 22:40:39.129280] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:43.926 [2024-10-01 22:40:39.129285] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:43.926 [2024-10-01 22:40:39.129294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:43.926 qpair failed and we were unable to recover it. 
00:41:43.926 [2024-10-01 22:40:39.139278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:43.926 [2024-10-01 22:40:39.139326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:43.926 [2024-10-01 22:40:39.139336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:43.926 [2024-10-01 22:40:39.139341] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:43.926 [2024-10-01 22:40:39.139345] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:43.926 [2024-10-01 22:40:39.139355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:43.926 qpair failed and we were unable to recover it. 00:41:43.926 [2024-10-01 22:40:39.149277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:43.926 [2024-10-01 22:40:39.149324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:43.926 [2024-10-01 22:40:39.149333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:43.926 [2024-10-01 22:40:39.149338] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:43.926 [2024-10-01 22:40:39.149343] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:43.926 [2024-10-01 22:40:39.149353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:43.926 qpair failed and we were unable to recover it. 00:41:43.926 [2024-10-01 22:40:39.159295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:43.926 [2024-10-01 22:40:39.159347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:43.926 [2024-10-01 22:40:39.159365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:43.926 [2024-10-01 22:40:39.159371] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:43.926 [2024-10-01 22:40:39.159376] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:43.926 [2024-10-01 22:40:39.159391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:43.926 qpair failed and we were unable to recover it. 
00:41:43.926 [2024-10-01 22:40:39.169340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:43.926 [2024-10-01 22:40:39.169418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:43.926 [2024-10-01 22:40:39.169450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:43.926 [2024-10-01 22:40:39.169456] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:43.926 [2024-10-01 22:40:39.169460] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:43.926 [2024-10-01 22:40:39.169479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:43.926 qpair failed and we were unable to recover it. 00:41:44.187 [2024-10-01 22:40:39.179348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:44.188 [2024-10-01 22:40:39.179394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:44.188 [2024-10-01 22:40:39.179413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:44.188 [2024-10-01 22:40:39.179419] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:44.188 [2024-10-01 22:40:39.179424] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:44.188 [2024-10-01 22:40:39.179438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:44.188 qpair failed and we were unable to recover it. 00:41:44.188 [2024-10-01 22:40:39.189382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:44.188 [2024-10-01 22:40:39.189430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:44.188 [2024-10-01 22:40:39.189448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:44.188 [2024-10-01 22:40:39.189454] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:44.188 [2024-10-01 22:40:39.189465] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:44.188 [2024-10-01 22:40:39.189478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:44.188 qpair failed and we were unable to recover it. 
00:41:44.188 [2024-10-01 22:40:39.199409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:44.188 [2024-10-01 22:40:39.199501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:44.188 [2024-10-01 22:40:39.199512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:44.188 [2024-10-01 22:40:39.199517] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:44.188 [2024-10-01 22:40:39.199521] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:44.188 [2024-10-01 22:40:39.199532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:44.188 qpair failed and we were unable to recover it.
[... the identical seven-line CONNECT-failure cycle above repeats 69 times in total, roughly every 10 ms, from [2024-10-01 22:40:39.199409] through [2024-10-01 22:40:39.881214]; only the per-record timestamps and the elapsed-time prefixes (00:41:44.188 / 00:41:44.450 / 00:41:44.713) advance between repetitions ...]
00:41:44.715 [2024-10-01 22:40:39.891097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:44.715 [2024-10-01 22:40:39.891161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:44.715 [2024-10-01 22:40:39.891171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:44.715 [2024-10-01 22:40:39.891176] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:44.715 [2024-10-01 22:40:39.891180] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:44.715 [2024-10-01 22:40:39.891190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:44.715 qpair failed and we were unable to recover it. 00:41:44.715 [2024-10-01 22:40:39.901138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:44.715 [2024-10-01 22:40:39.901185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:44.715 [2024-10-01 22:40:39.901194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:44.715 [2024-10-01 22:40:39.901199] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:44.715 [2024-10-01 22:40:39.901203] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:44.715 [2024-10-01 22:40:39.901213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:44.715 qpair failed and we were unable to recover it. 00:41:44.715 [2024-10-01 22:40:39.911190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:44.715 [2024-10-01 22:40:39.911230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:44.715 [2024-10-01 22:40:39.911240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:44.715 [2024-10-01 22:40:39.911245] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:44.715 [2024-10-01 22:40:39.911249] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:44.715 [2024-10-01 22:40:39.911259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:44.715 qpair failed and we were unable to recover it. 
00:41:44.715 [2024-10-01 22:40:39.921268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:44.715 [2024-10-01 22:40:39.921306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:44.715 [2024-10-01 22:40:39.921315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:44.715 [2024-10-01 22:40:39.921320] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:44.715 [2024-10-01 22:40:39.921325] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:44.715 [2024-10-01 22:40:39.921334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:44.715 qpair failed and we were unable to recover it. 00:41:44.715 [2024-10-01 22:40:39.931315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:44.715 [2024-10-01 22:40:39.931358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:44.715 [2024-10-01 22:40:39.931367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:44.715 [2024-10-01 22:40:39.931373] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:44.715 [2024-10-01 22:40:39.931378] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:44.715 [2024-10-01 22:40:39.931388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:44.715 qpair failed and we were unable to recover it. 00:41:44.715 [2024-10-01 22:40:39.941385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:44.715 [2024-10-01 22:40:39.941428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:44.715 [2024-10-01 22:40:39.941438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:44.715 [2024-10-01 22:40:39.941442] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:44.715 [2024-10-01 22:40:39.941447] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:44.715 [2024-10-01 22:40:39.941457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:44.715 qpair failed and we were unable to recover it. 
00:41:44.715 [2024-10-01 22:40:39.951391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:44.715 [2024-10-01 22:40:39.951431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:44.715 [2024-10-01 22:40:39.951446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:44.715 [2024-10-01 22:40:39.951451] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:44.715 [2024-10-01 22:40:39.951456] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:44.715 [2024-10-01 22:40:39.951466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:44.715 qpair failed and we were unable to recover it. 00:41:44.715 [2024-10-01 22:40:39.961387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:44.715 [2024-10-01 22:40:39.961425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:44.715 [2024-10-01 22:40:39.961434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:44.715 [2024-10-01 22:40:39.961439] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:44.715 [2024-10-01 22:40:39.961444] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:44.715 [2024-10-01 22:40:39.961454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:44.715 qpair failed and we were unable to recover it. 00:41:44.976 [2024-10-01 22:40:39.971447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:44.976 [2024-10-01 22:40:39.971530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:44.976 [2024-10-01 22:40:39.971540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:44.976 [2024-10-01 22:40:39.971545] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:44.976 [2024-10-01 22:40:39.971550] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:44.976 [2024-10-01 22:40:39.971560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:44.976 qpair failed and we were unable to recover it. 
00:41:44.976 [2024-10-01 22:40:39.981467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:44.976 [2024-10-01 22:40:39.981508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:44.976 [2024-10-01 22:40:39.981518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:44.976 [2024-10-01 22:40:39.981523] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:44.976 [2024-10-01 22:40:39.981527] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:44.976 [2024-10-01 22:40:39.981537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:44.976 qpair failed and we were unable to recover it. 00:41:44.976 [2024-10-01 22:40:39.991481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:44.976 [2024-10-01 22:40:39.991521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:44.976 [2024-10-01 22:40:39.991531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:44.976 [2024-10-01 22:40:39.991536] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:44.976 [2024-10-01 22:40:39.991541] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:44.976 [2024-10-01 22:40:39.991551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:44.976 qpair failed and we were unable to recover it. 00:41:44.976 [2024-10-01 22:40:40.001906] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:44.976 [2024-10-01 22:40:40.001951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:44.976 [2024-10-01 22:40:40.001961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:44.976 [2024-10-01 22:40:40.001966] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:44.976 [2024-10-01 22:40:40.001971] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:44.976 [2024-10-01 22:40:40.001982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:44.976 qpair failed and we were unable to recover it. 
00:41:44.976 [2024-10-01 22:40:40.011523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:44.976 [2024-10-01 22:40:40.011565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:44.976 [2024-10-01 22:40:40.011575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:44.976 [2024-10-01 22:40:40.011580] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:44.976 [2024-10-01 22:40:40.011584] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:44.976 [2024-10-01 22:40:40.011595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:44.976 qpair failed and we were unable to recover it. 00:41:44.976 [2024-10-01 22:40:40.021606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:44.976 [2024-10-01 22:40:40.021653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:44.976 [2024-10-01 22:40:40.021663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:44.976 [2024-10-01 22:40:40.021668] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:44.976 [2024-10-01 22:40:40.021673] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:44.976 [2024-10-01 22:40:40.021683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:44.976 qpair failed and we were unable to recover it. 00:41:44.976 [2024-10-01 22:40:40.031629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:44.976 [2024-10-01 22:40:40.031668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:44.976 [2024-10-01 22:40:40.031678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:44.976 [2024-10-01 22:40:40.031683] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:44.976 [2024-10-01 22:40:40.031688] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:44.976 [2024-10-01 22:40:40.031698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:44.976 qpair failed and we were unable to recover it. 
00:41:44.976 [2024-10-01 22:40:40.041640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:44.976 [2024-10-01 22:40:40.041680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:44.976 [2024-10-01 22:40:40.041693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:44.976 [2024-10-01 22:40:40.041698] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:44.976 [2024-10-01 22:40:40.041703] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:44.976 [2024-10-01 22:40:40.041713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:44.976 qpair failed and we were unable to recover it. 00:41:44.976 [2024-10-01 22:40:40.051662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:44.976 [2024-10-01 22:40:40.051705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:44.976 [2024-10-01 22:40:40.051715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:44.976 [2024-10-01 22:40:40.051720] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:44.976 [2024-10-01 22:40:40.051724] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:44.976 [2024-10-01 22:40:40.051735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:44.976 qpair failed and we were unable to recover it. 00:41:44.976 [2024-10-01 22:40:40.061661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:44.977 [2024-10-01 22:40:40.061705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:44.977 [2024-10-01 22:40:40.061715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:44.977 [2024-10-01 22:40:40.061720] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:44.977 [2024-10-01 22:40:40.061725] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:44.977 [2024-10-01 22:40:40.061735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:44.977 qpair failed and we were unable to recover it. 
00:41:44.977 [2024-10-01 22:40:40.071689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:44.977 [2024-10-01 22:40:40.071732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:44.977 [2024-10-01 22:40:40.071742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:44.977 [2024-10-01 22:40:40.071747] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:44.977 [2024-10-01 22:40:40.071752] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:44.977 [2024-10-01 22:40:40.071762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:44.977 qpair failed and we were unable to recover it. 00:41:44.977 [2024-10-01 22:40:40.081602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:44.977 [2024-10-01 22:40:40.081639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:44.977 [2024-10-01 22:40:40.081649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:44.977 [2024-10-01 22:40:40.081654] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:44.977 [2024-10-01 22:40:40.081658] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:44.977 [2024-10-01 22:40:40.081671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:44.977 qpair failed and we were unable to recover it. 00:41:44.977 [2024-10-01 22:40:40.091926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:44.977 [2024-10-01 22:40:40.092012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:44.977 [2024-10-01 22:40:40.092023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:44.977 [2024-10-01 22:40:40.092028] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:44.977 [2024-10-01 22:40:40.092032] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:44.977 [2024-10-01 22:40:40.092043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:44.977 qpair failed and we were unable to recover it. 
00:41:44.977 [2024-10-01 22:40:40.101804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:44.977 [2024-10-01 22:40:40.101848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:44.977 [2024-10-01 22:40:40.101857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:44.977 [2024-10-01 22:40:40.101862] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:44.977 [2024-10-01 22:40:40.101867] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:44.977 [2024-10-01 22:40:40.101878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:44.977 qpair failed and we were unable to recover it. 00:41:44.977 [2024-10-01 22:40:40.111835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:44.977 [2024-10-01 22:40:40.111875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:44.977 [2024-10-01 22:40:40.111885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:44.977 [2024-10-01 22:40:40.111890] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:44.977 [2024-10-01 22:40:40.111895] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:44.977 [2024-10-01 22:40:40.111905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:44.977 qpair failed and we were unable to recover it. 00:41:44.977 [2024-10-01 22:40:40.121771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:44.977 [2024-10-01 22:40:40.121816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:44.977 [2024-10-01 22:40:40.121826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:44.977 [2024-10-01 22:40:40.121831] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:44.977 [2024-10-01 22:40:40.121836] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:44.977 [2024-10-01 22:40:40.121846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:44.977 qpair failed and we were unable to recover it. 
00:41:44.977 [2024-10-01 22:40:40.131851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:44.977 [2024-10-01 22:40:40.131891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:44.977 [2024-10-01 22:40:40.131904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:44.977 [2024-10-01 22:40:40.131909] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:44.977 [2024-10-01 22:40:40.131914] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:44.977 [2024-10-01 22:40:40.131924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:44.977 qpair failed and we were unable to recover it. 00:41:44.977 [2024-10-01 22:40:40.141886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:44.977 [2024-10-01 22:40:40.141928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:44.977 [2024-10-01 22:40:40.141938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:44.977 [2024-10-01 22:40:40.141943] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:44.977 [2024-10-01 22:40:40.141947] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:44.977 [2024-10-01 22:40:40.141957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:44.977 qpair failed and we were unable to recover it. 00:41:44.977 [2024-10-01 22:40:40.151989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:44.977 [2024-10-01 22:40:40.152026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:44.977 [2024-10-01 22:40:40.152035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:44.977 [2024-10-01 22:40:40.152040] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:44.977 [2024-10-01 22:40:40.152045] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:44.977 [2024-10-01 22:40:40.152055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:44.977 qpair failed and we were unable to recover it. 
00:41:44.977 [2024-10-01 22:40:40.161934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:44.977 [2024-10-01 22:40:40.161975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:44.977 [2024-10-01 22:40:40.161984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:44.977 [2024-10-01 22:40:40.161989] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:44.977 [2024-10-01 22:40:40.161993] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:44.977 [2024-10-01 22:40:40.162003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:44.977 qpair failed and we were unable to recover it. 00:41:44.977 [2024-10-01 22:40:40.171954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:44.977 [2024-10-01 22:40:40.171994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:44.977 [2024-10-01 22:40:40.172004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:44.977 [2024-10-01 22:40:40.172009] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:44.977 [2024-10-01 22:40:40.172016] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:44.977 [2024-10-01 22:40:40.172026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:44.977 qpair failed and we were unable to recover it. 00:41:44.977 [2024-10-01 22:40:40.181958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:44.977 [2024-10-01 22:40:40.182004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:44.977 [2024-10-01 22:40:40.182013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:44.977 [2024-10-01 22:40:40.182018] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:44.977 [2024-10-01 22:40:40.182023] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:44.977 [2024-10-01 22:40:40.182033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:44.977 qpair failed and we were unable to recover it. 
00:41:44.977 [2024-10-01 22:40:40.192003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:44.977 [2024-10-01 22:40:40.192044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:44.977 [2024-10-01 22:40:40.192053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:44.978 [2024-10-01 22:40:40.192058] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:44.978 [2024-10-01 22:40:40.192063] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:44.978 [2024-10-01 22:40:40.192073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:44.978 qpair failed and we were unable to recover it. 00:41:44.978 [2024-10-01 22:40:40.202020] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:44.978 [2024-10-01 22:40:40.202056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:44.978 [2024-10-01 22:40:40.202066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:44.978 [2024-10-01 22:40:40.202070] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:44.978 [2024-10-01 22:40:40.202075] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:44.978 [2024-10-01 22:40:40.202085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:44.978 qpair failed and we were unable to recover it. 00:41:44.978 [2024-10-01 22:40:40.212081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:44.978 [2024-10-01 22:40:40.212144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:44.978 [2024-10-01 22:40:40.212154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:44.978 [2024-10-01 22:40:40.212159] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:44.978 [2024-10-01 22:40:40.212163] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:44.978 [2024-10-01 22:40:40.212173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:44.978 qpair failed and we were unable to recover it. 
00:41:44.978 [2024-10-01 22:40:40.222117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:44.978 [2024-10-01 22:40:40.222163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:44.978 [2024-10-01 22:40:40.222173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:44.978 [2024-10-01 22:40:40.222178] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:44.978 [2024-10-01 22:40:40.222182] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:44.978 [2024-10-01 22:40:40.222192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:44.978 qpair failed and we were unable to recover it. 00:41:45.239 [2024-10-01 22:40:40.232130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:45.239 [2024-10-01 22:40:40.232173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:45.239 [2024-10-01 22:40:40.232183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:45.239 [2024-10-01 22:40:40.232188] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:45.239 [2024-10-01 22:40:40.232192] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:45.239 [2024-10-01 22:40:40.232202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:45.239 qpair failed and we were unable to recover it. 00:41:45.239 [2024-10-01 22:40:40.242119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:45.239 [2024-10-01 22:40:40.242160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:45.239 [2024-10-01 22:40:40.242170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:45.239 [2024-10-01 22:40:40.242174] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:45.239 [2024-10-01 22:40:40.242179] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:45.239 [2024-10-01 22:40:40.242189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:45.239 qpair failed and we were unable to recover it. 
00:41:45.239 [2024-10-01 22:40:40.252191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:45.239 [2024-10-01 22:40:40.252230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:45.239 [2024-10-01 22:40:40.252240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:45.239 [2024-10-01 22:40:40.252244] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:45.239 [2024-10-01 22:40:40.252249] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:45.239 [2024-10-01 22:40:40.252259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:45.239 qpair failed and we were unable to recover it. 00:41:45.239 [2024-10-01 22:40:40.262201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:45.240 [2024-10-01 22:40:40.262245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:45.240 [2024-10-01 22:40:40.262255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:45.240 [2024-10-01 22:40:40.262260] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:45.240 [2024-10-01 22:40:40.262267] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:45.240 [2024-10-01 22:40:40.262277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:45.240 qpair failed and we were unable to recover it. 00:41:45.240 [2024-10-01 22:40:40.272217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:45.240 [2024-10-01 22:40:40.272257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:45.240 [2024-10-01 22:40:40.272266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:45.240 [2024-10-01 22:40:40.272271] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:45.240 [2024-10-01 22:40:40.272275] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:45.240 [2024-10-01 22:40:40.272285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:45.240 qpair failed and we were unable to recover it. 
00:41:45.240 [2024-10-01 22:40:40.282252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:45.240 [2024-10-01 22:40:40.282289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:45.240 [2024-10-01 22:40:40.282298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:45.240 [2024-10-01 22:40:40.282303] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:45.240 [2024-10-01 22:40:40.282307] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:45.240 [2024-10-01 22:40:40.282317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:45.240 qpair failed and we were unable to recover it. 00:41:45.240 [2024-10-01 22:40:40.292295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:45.240 [2024-10-01 22:40:40.292371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:45.240 [2024-10-01 22:40:40.292380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:45.240 [2024-10-01 22:40:40.292385] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:45.240 [2024-10-01 22:40:40.292389] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:45.240 [2024-10-01 22:40:40.292399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:45.240 qpair failed and we were unable to recover it. 00:41:45.240 [2024-10-01 22:40:40.302333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:45.240 [2024-10-01 22:40:40.302397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:45.240 [2024-10-01 22:40:40.302407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:45.240 [2024-10-01 22:40:40.302412] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:45.240 [2024-10-01 22:40:40.302416] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:45.240 [2024-10-01 22:40:40.302425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:45.240 qpair failed and we were unable to recover it. 
00:41:45.240 [2024-10-01 22:40:40.312343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:45.240 [2024-10-01 22:40:40.312388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:45.240 [2024-10-01 22:40:40.312398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:45.240 [2024-10-01 22:40:40.312402] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:45.240 [2024-10-01 22:40:40.312407] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:45.240 [2024-10-01 22:40:40.312417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:45.240 qpair failed and we were unable to recover it. 00:41:45.240 [2024-10-01 22:40:40.322361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:45.240 [2024-10-01 22:40:40.322406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:45.240 [2024-10-01 22:40:40.322425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:45.240 [2024-10-01 22:40:40.322432] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:45.240 [2024-10-01 22:40:40.322436] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:45.240 [2024-10-01 22:40:40.322450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:45.240 qpair failed and we were unable to recover it. 00:41:45.240 [2024-10-01 22:40:40.332272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:45.240 [2024-10-01 22:40:40.332316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:45.240 [2024-10-01 22:40:40.332334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:45.240 [2024-10-01 22:40:40.332340] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:45.240 [2024-10-01 22:40:40.332345] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:45.240 [2024-10-01 22:40:40.332359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:45.240 qpair failed and we were unable to recover it. 
00:41:45.240 [2024-10-01 22:40:40.342504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:45.240 [2024-10-01 22:40:40.342557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:45.240 [2024-10-01 22:40:40.342575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:45.240 [2024-10-01 22:40:40.342581] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:45.240 [2024-10-01 22:40:40.342586] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:45.240 [2024-10-01 22:40:40.342600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:45.240 qpair failed and we were unable to recover it. 00:41:45.240 [2024-10-01 22:40:40.352393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:45.240 [2024-10-01 22:40:40.352432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:45.240 [2024-10-01 22:40:40.352443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:45.240 [2024-10-01 22:40:40.352451] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:45.240 [2024-10-01 22:40:40.352456] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:45.240 [2024-10-01 22:40:40.352467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:45.240 qpair failed and we were unable to recover it. 00:41:45.240 [2024-10-01 22:40:40.362372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:45.240 [2024-10-01 22:40:40.362413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:45.240 [2024-10-01 22:40:40.362424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:45.240 [2024-10-01 22:40:40.362429] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:45.240 [2024-10-01 22:40:40.362434] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:45.240 [2024-10-01 22:40:40.362444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:45.240 qpair failed and we were unable to recover it. 
00:41:46.030 [2024-10-01 22:40:41.034246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.030 [2024-10-01 22:40:41.034295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.030 [2024-10-01 22:40:41.034305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.030 [2024-10-01 22:40:41.034311] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.030 [2024-10-01 22:40:41.034315] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.030 [2024-10-01 22:40:41.034325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.030 qpair failed and we were unable to recover it. 00:41:46.030 [2024-10-01 22:40:41.044264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.030 [2024-10-01 22:40:41.044320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.030 [2024-10-01 22:40:41.044329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.030 [2024-10-01 22:40:41.044334] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.030 [2024-10-01 22:40:41.044339] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.030 [2024-10-01 22:40:41.044348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.030 qpair failed and we were unable to recover it. 00:41:46.030 [2024-10-01 22:40:41.054314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.030 [2024-10-01 22:40:41.054360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.030 [2024-10-01 22:40:41.054369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.030 [2024-10-01 22:40:41.054374] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.030 [2024-10-01 22:40:41.054379] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.030 [2024-10-01 22:40:41.054388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.030 qpair failed and we were unable to recover it. 
00:41:46.030 [2024-10-01 22:40:41.064348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.030 [2024-10-01 22:40:41.064411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.030 [2024-10-01 22:40:41.064421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.030 [2024-10-01 22:40:41.064426] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.030 [2024-10-01 22:40:41.064430] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.030 [2024-10-01 22:40:41.064444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.030 qpair failed and we were unable to recover it. 00:41:46.030 [2024-10-01 22:40:41.074329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.030 [2024-10-01 22:40:41.074366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.030 [2024-10-01 22:40:41.074376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.030 [2024-10-01 22:40:41.074381] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.030 [2024-10-01 22:40:41.074385] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.030 [2024-10-01 22:40:41.074395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.030 qpair failed and we were unable to recover it. 00:41:46.030 [2024-10-01 22:40:41.084386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.030 [2024-10-01 22:40:41.084423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.030 [2024-10-01 22:40:41.084433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.030 [2024-10-01 22:40:41.084438] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.030 [2024-10-01 22:40:41.084442] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.030 [2024-10-01 22:40:41.084452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.030 qpair failed and we were unable to recover it. 
00:41:46.030 [2024-10-01 22:40:41.094368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.030 [2024-10-01 22:40:41.094408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.030 [2024-10-01 22:40:41.094417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.030 [2024-10-01 22:40:41.094422] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.030 [2024-10-01 22:40:41.094427] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.030 [2024-10-01 22:40:41.094437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.030 qpair failed and we were unable to recover it. 00:41:46.031 [2024-10-01 22:40:41.104428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.031 [2024-10-01 22:40:41.104468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.031 [2024-10-01 22:40:41.104478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.031 [2024-10-01 22:40:41.104482] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.031 [2024-10-01 22:40:41.104487] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.031 [2024-10-01 22:40:41.104497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.031 qpair failed and we were unable to recover it. 00:41:46.031 [2024-10-01 22:40:41.114450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.031 [2024-10-01 22:40:41.114523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.031 [2024-10-01 22:40:41.114535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.031 [2024-10-01 22:40:41.114540] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.031 [2024-10-01 22:40:41.114544] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.031 [2024-10-01 22:40:41.114554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.031 qpair failed and we were unable to recover it. 
00:41:46.031 [2024-10-01 22:40:41.124483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.031 [2024-10-01 22:40:41.124524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.031 [2024-10-01 22:40:41.124533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.031 [2024-10-01 22:40:41.124538] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.031 [2024-10-01 22:40:41.124543] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.031 [2024-10-01 22:40:41.124552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.031 qpair failed and we were unable to recover it. 00:41:46.031 [2024-10-01 22:40:41.134555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.031 [2024-10-01 22:40:41.134636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.031 [2024-10-01 22:40:41.134646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.031 [2024-10-01 22:40:41.134651] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.031 [2024-10-01 22:40:41.134655] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.031 [2024-10-01 22:40:41.134666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.031 qpair failed and we were unable to recover it. 00:41:46.031 [2024-10-01 22:40:41.144557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.031 [2024-10-01 22:40:41.144602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.031 [2024-10-01 22:40:41.144612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.031 [2024-10-01 22:40:41.144617] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.031 [2024-10-01 22:40:41.144621] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.031 [2024-10-01 22:40:41.144634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.031 qpair failed and we were unable to recover it. 
00:41:46.031 [2024-10-01 22:40:41.154566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.031 [2024-10-01 22:40:41.154603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.031 [2024-10-01 22:40:41.154613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.031 [2024-10-01 22:40:41.154617] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.031 [2024-10-01 22:40:41.154622] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.031 [2024-10-01 22:40:41.154637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.031 qpair failed and we were unable to recover it. 00:41:46.031 [2024-10-01 22:40:41.164584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.031 [2024-10-01 22:40:41.164630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.031 [2024-10-01 22:40:41.164640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.031 [2024-10-01 22:40:41.164645] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.031 [2024-10-01 22:40:41.164650] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.031 [2024-10-01 22:40:41.164661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.031 qpair failed and we were unable to recover it. 00:41:46.031 [2024-10-01 22:40:41.174630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.031 [2024-10-01 22:40:41.174673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.031 [2024-10-01 22:40:41.174683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.031 [2024-10-01 22:40:41.174688] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.031 [2024-10-01 22:40:41.174692] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.031 [2024-10-01 22:40:41.174702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.031 qpair failed and we were unable to recover it. 
00:41:46.031 [2024-10-01 22:40:41.184650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.031 [2024-10-01 22:40:41.184693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.031 [2024-10-01 22:40:41.184703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.031 [2024-10-01 22:40:41.184708] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.031 [2024-10-01 22:40:41.184712] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.031 [2024-10-01 22:40:41.184722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.031 qpair failed and we were unable to recover it. 00:41:46.031 [2024-10-01 22:40:41.194631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.031 [2024-10-01 22:40:41.194672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.031 [2024-10-01 22:40:41.194682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.031 [2024-10-01 22:40:41.194687] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.031 [2024-10-01 22:40:41.194691] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.031 [2024-10-01 22:40:41.194701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.031 qpair failed and we were unable to recover it. 00:41:46.031 [2024-10-01 22:40:41.204708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.031 [2024-10-01 22:40:41.204796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.031 [2024-10-01 22:40:41.204809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.031 [2024-10-01 22:40:41.204814] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.031 [2024-10-01 22:40:41.204818] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.031 [2024-10-01 22:40:41.204829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.031 qpair failed and we were unable to recover it. 
00:41:46.031 [2024-10-01 22:40:41.214763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.031 [2024-10-01 22:40:41.214847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.031 [2024-10-01 22:40:41.214856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.031 [2024-10-01 22:40:41.214861] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.031 [2024-10-01 22:40:41.214865] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.031 [2024-10-01 22:40:41.214876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.031 qpair failed and we were unable to recover it. 00:41:46.031 [2024-10-01 22:40:41.224773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.031 [2024-10-01 22:40:41.224817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.031 [2024-10-01 22:40:41.224827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.031 [2024-10-01 22:40:41.224832] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.031 [2024-10-01 22:40:41.224837] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.031 [2024-10-01 22:40:41.224847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.031 qpair failed and we were unable to recover it. 00:41:46.031 [2024-10-01 22:40:41.234789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.031 [2024-10-01 22:40:41.234828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.031 [2024-10-01 22:40:41.234838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.031 [2024-10-01 22:40:41.234843] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.032 [2024-10-01 22:40:41.234847] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.032 [2024-10-01 22:40:41.234858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.032 qpair failed and we were unable to recover it. 
00:41:46.032 [2024-10-01 22:40:41.244813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.032 [2024-10-01 22:40:41.244850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.032 [2024-10-01 22:40:41.244860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.032 [2024-10-01 22:40:41.244864] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.032 [2024-10-01 22:40:41.244872] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.032 [2024-10-01 22:40:41.244881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.032 qpair failed and we were unable to recover it. 00:41:46.032 [2024-10-01 22:40:41.254840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.032 [2024-10-01 22:40:41.254883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.032 [2024-10-01 22:40:41.254893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.032 [2024-10-01 22:40:41.254898] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.032 [2024-10-01 22:40:41.254903] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.032 [2024-10-01 22:40:41.254912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.032 qpair failed and we were unable to recover it. 00:41:46.032 [2024-10-01 22:40:41.264888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.032 [2024-10-01 22:40:41.264932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.032 [2024-10-01 22:40:41.264942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.032 [2024-10-01 22:40:41.264947] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.032 [2024-10-01 22:40:41.264951] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.032 [2024-10-01 22:40:41.264961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.032 qpair failed and we were unable to recover it. 
00:41:46.032 [2024-10-01 22:40:41.274900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.032 [2024-10-01 22:40:41.274939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.032 [2024-10-01 22:40:41.274950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.032 [2024-10-01 22:40:41.274955] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.032 [2024-10-01 22:40:41.274959] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.032 [2024-10-01 22:40:41.274969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.032 qpair failed and we were unable to recover it. 00:41:46.294 [2024-10-01 22:40:41.284945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.294 [2024-10-01 22:40:41.285032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.294 [2024-10-01 22:40:41.285042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.294 [2024-10-01 22:40:41.285047] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.294 [2024-10-01 22:40:41.285052] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.294 [2024-10-01 22:40:41.285062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.294 qpair failed and we were unable to recover it. 00:41:46.294 [2024-10-01 22:40:41.294937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.294 [2024-10-01 22:40:41.295001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.294 [2024-10-01 22:40:41.295011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.294 [2024-10-01 22:40:41.295016] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.294 [2024-10-01 22:40:41.295020] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.294 [2024-10-01 22:40:41.295030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.294 qpair failed and we were unable to recover it. 
00:41:46.294 [2024-10-01 22:40:41.304993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.294 [2024-10-01 22:40:41.305040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.294 [2024-10-01 22:40:41.305050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.294 [2024-10-01 22:40:41.305054] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.294 [2024-10-01 22:40:41.305059] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.294 [2024-10-01 22:40:41.305069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.294 qpair failed and we were unable to recover it. 00:41:46.294 [2024-10-01 22:40:41.315010] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.294 [2024-10-01 22:40:41.315048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.294 [2024-10-01 22:40:41.315058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.294 [2024-10-01 22:40:41.315063] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.294 [2024-10-01 22:40:41.315067] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.294 [2024-10-01 22:40:41.315077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.294 qpair failed and we were unable to recover it. 00:41:46.294 [2024-10-01 22:40:41.324981] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.294 [2024-10-01 22:40:41.325023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.294 [2024-10-01 22:40:41.325033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.294 [2024-10-01 22:40:41.325040] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.294 [2024-10-01 22:40:41.325045] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.294 [2024-10-01 22:40:41.325056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.294 qpair failed and we were unable to recover it. 
00:41:46.294 [2024-10-01 22:40:41.335024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.294 [2024-10-01 22:40:41.335065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.294 [2024-10-01 22:40:41.335075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.294 [2024-10-01 22:40:41.335080] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.294 [2024-10-01 22:40:41.335087] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.294 [2024-10-01 22:40:41.335097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.294 qpair failed and we were unable to recover it. 00:41:46.294 [2024-10-01 22:40:41.345104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.294 [2024-10-01 22:40:41.345145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.294 [2024-10-01 22:40:41.345155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.294 [2024-10-01 22:40:41.345160] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.294 [2024-10-01 22:40:41.345164] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.294 [2024-10-01 22:40:41.345175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.294 qpair failed and we were unable to recover it. 00:41:46.294 [2024-10-01 22:40:41.355092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.294 [2024-10-01 22:40:41.355133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.294 [2024-10-01 22:40:41.355143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.294 [2024-10-01 22:40:41.355148] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.294 [2024-10-01 22:40:41.355153] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.294 [2024-10-01 22:40:41.355163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.294 qpair failed and we were unable to recover it. 
00:41:46.294 [2024-10-01 22:40:41.364998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.294 [2024-10-01 22:40:41.365036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.294 [2024-10-01 22:40:41.365046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.294 [2024-10-01 22:40:41.365050] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.294 [2024-10-01 22:40:41.365055] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.294 [2024-10-01 22:40:41.365064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.294 qpair failed and we were unable to recover it. 00:41:46.294 [2024-10-01 22:40:41.375027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.294 [2024-10-01 22:40:41.375069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.294 [2024-10-01 22:40:41.375079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.294 [2024-10-01 22:40:41.375083] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.294 [2024-10-01 22:40:41.375088] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.294 [2024-10-01 22:40:41.375097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.294 qpair failed and we were unable to recover it. 00:41:46.294 [2024-10-01 22:40:41.385205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.294 [2024-10-01 22:40:41.385254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.294 [2024-10-01 22:40:41.385263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.294 [2024-10-01 22:40:41.385268] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.294 [2024-10-01 22:40:41.385273] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.294 [2024-10-01 22:40:41.385282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.294 qpair failed and we were unable to recover it. 
00:41:46.294 [2024-10-01 22:40:41.395205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.294 [2024-10-01 22:40:41.395245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.294 [2024-10-01 22:40:41.395259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.294 [2024-10-01 22:40:41.395264] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.294 [2024-10-01 22:40:41.395270] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.294 [2024-10-01 22:40:41.395280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.294 qpair failed and we were unable to recover it. 00:41:46.294 [2024-10-01 22:40:41.405216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.294 [2024-10-01 22:40:41.405252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.294 [2024-10-01 22:40:41.405262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.294 [2024-10-01 22:40:41.405267] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.295 [2024-10-01 22:40:41.405271] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.295 [2024-10-01 22:40:41.405281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.295 qpair failed and we were unable to recover it. 00:41:46.295 [2024-10-01 22:40:41.415250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.295 [2024-10-01 22:40:41.415300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.295 [2024-10-01 22:40:41.415310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.295 [2024-10-01 22:40:41.415315] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.295 [2024-10-01 22:40:41.415319] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.295 [2024-10-01 22:40:41.415329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.295 qpair failed and we were unable to recover it. 
00:41:46.295 [2024-10-01 22:40:41.425282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.295 [2024-10-01 22:40:41.425328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.295 [2024-10-01 22:40:41.425338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.295 [2024-10-01 22:40:41.425345] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.295 [2024-10-01 22:40:41.425350] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.295 [2024-10-01 22:40:41.425359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.295 qpair failed and we were unable to recover it. 00:41:46.295 [2024-10-01 22:40:41.435279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.295 [2024-10-01 22:40:41.435317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.295 [2024-10-01 22:40:41.435326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.295 [2024-10-01 22:40:41.435331] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.295 [2024-10-01 22:40:41.435335] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.295 [2024-10-01 22:40:41.435346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.295 qpair failed and we were unable to recover it. 00:41:46.295 [2024-10-01 22:40:41.445301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.295 [2024-10-01 22:40:41.445339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.295 [2024-10-01 22:40:41.445349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.295 [2024-10-01 22:40:41.445354] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.295 [2024-10-01 22:40:41.445358] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.295 [2024-10-01 22:40:41.445368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.295 qpair failed and we were unable to recover it. 
00:41:46.295 [2024-10-01 22:40:41.455339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.295 [2024-10-01 22:40:41.455381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.295 [2024-10-01 22:40:41.455391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.295 [2024-10-01 22:40:41.455396] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.295 [2024-10-01 22:40:41.455400] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.295 [2024-10-01 22:40:41.455409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.295 qpair failed and we were unable to recover it. 00:41:46.295 [2024-10-01 22:40:41.465391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.295 [2024-10-01 22:40:41.465435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.295 [2024-10-01 22:40:41.465445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.295 [2024-10-01 22:40:41.465450] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.295 [2024-10-01 22:40:41.465454] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.295 [2024-10-01 22:40:41.465464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.295 qpair failed and we were unable to recover it. 00:41:46.295 [2024-10-01 22:40:41.475426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.295 [2024-10-01 22:40:41.475466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.295 [2024-10-01 22:40:41.475476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.295 [2024-10-01 22:40:41.475481] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.295 [2024-10-01 22:40:41.475485] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.295 [2024-10-01 22:40:41.475495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.295 qpair failed and we were unable to recover it. 
00:41:46.295 [2024-10-01 22:40:41.485438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.295 [2024-10-01 22:40:41.485481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.295 [2024-10-01 22:40:41.485491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.295 [2024-10-01 22:40:41.485496] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.295 [2024-10-01 22:40:41.485500] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.295 [2024-10-01 22:40:41.485510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.295 qpair failed and we were unable to recover it. 00:41:46.295 [2024-10-01 22:40:41.495485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.295 [2024-10-01 22:40:41.495527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.295 [2024-10-01 22:40:41.495536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.295 [2024-10-01 22:40:41.495541] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.295 [2024-10-01 22:40:41.495545] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.295 [2024-10-01 22:40:41.495556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.295 qpair failed and we were unable to recover it. 00:41:46.295 [2024-10-01 22:40:41.505518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.295 [2024-10-01 22:40:41.505561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.295 [2024-10-01 22:40:41.505572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.295 [2024-10-01 22:40:41.505576] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.295 [2024-10-01 22:40:41.505581] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.295 [2024-10-01 22:40:41.505591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.295 qpair failed and we were unable to recover it. 
00:41:46.295 [2024-10-01 22:40:41.515531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.295 [2024-10-01 22:40:41.515568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.295 [2024-10-01 22:40:41.515582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.295 [2024-10-01 22:40:41.515587] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.295 [2024-10-01 22:40:41.515592] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.295 [2024-10-01 22:40:41.515602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.295 qpair failed and we were unable to recover it. 00:41:46.295 [2024-10-01 22:40:41.525551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.296 [2024-10-01 22:40:41.525588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.296 [2024-10-01 22:40:41.525598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.296 [2024-10-01 22:40:41.525603] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.296 [2024-10-01 22:40:41.525607] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.296 [2024-10-01 22:40:41.525617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.296 qpair failed and we were unable to recover it. 00:41:46.296 [2024-10-01 22:40:41.535583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.296 [2024-10-01 22:40:41.535641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.296 [2024-10-01 22:40:41.535651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.296 [2024-10-01 22:40:41.535656] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.296 [2024-10-01 22:40:41.535661] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.296 [2024-10-01 22:40:41.535671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.296 qpair failed and we were unable to recover it. 
00:41:46.557 [2024-10-01 22:40:41.545612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.557 [2024-10-01 22:40:41.545662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.557 [2024-10-01 22:40:41.545672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.557 [2024-10-01 22:40:41.545677] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.557 [2024-10-01 22:40:41.545681] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.557 [2024-10-01 22:40:41.545692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.557 qpair failed and we were unable to recover it. 00:41:46.557 [2024-10-01 22:40:41.555604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.557 [2024-10-01 22:40:41.555646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.557 [2024-10-01 22:40:41.555656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.557 [2024-10-01 22:40:41.555661] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.557 [2024-10-01 22:40:41.555665] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.557 [2024-10-01 22:40:41.555676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.557 qpair failed and we were unable to recover it. 00:41:46.557 [2024-10-01 22:40:41.565657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.557 [2024-10-01 22:40:41.565695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.557 [2024-10-01 22:40:41.565704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.557 [2024-10-01 22:40:41.565709] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.557 [2024-10-01 22:40:41.565713] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.557 [2024-10-01 22:40:41.565723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.557 qpair failed and we were unable to recover it. 
00:41:46.557 [2024-10-01 22:40:41.575692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.557 [2024-10-01 22:40:41.575733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.557 [2024-10-01 22:40:41.575743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.557 [2024-10-01 22:40:41.575748] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.557 [2024-10-01 22:40:41.575752] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.557 [2024-10-01 22:40:41.575762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.557 qpair failed and we were unable to recover it. 00:41:46.557 [2024-10-01 22:40:41.585726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.557 [2024-10-01 22:40:41.585768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.557 [2024-10-01 22:40:41.585778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.557 [2024-10-01 22:40:41.585782] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.557 [2024-10-01 22:40:41.585787] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.557 [2024-10-01 22:40:41.585797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.557 qpair failed and we were unable to recover it. 00:41:46.557 [2024-10-01 22:40:41.595734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.557 [2024-10-01 22:40:41.595778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.557 [2024-10-01 22:40:41.595788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.557 [2024-10-01 22:40:41.595793] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.557 [2024-10-01 22:40:41.595797] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.557 [2024-10-01 22:40:41.595807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.557 qpair failed and we were unable to recover it. 
00:41:46.557 [2024-10-01 22:40:41.605762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.557 [2024-10-01 22:40:41.605800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.557 [2024-10-01 22:40:41.605812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.557 [2024-10-01 22:40:41.605817] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.557 [2024-10-01 22:40:41.605821] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.557 [2024-10-01 22:40:41.605831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.557 qpair failed and we were unable to recover it. 00:41:46.557 [2024-10-01 22:40:41.615801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.557 [2024-10-01 22:40:41.615845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.557 [2024-10-01 22:40:41.615855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.557 [2024-10-01 22:40:41.615860] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.557 [2024-10-01 22:40:41.615864] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.558 [2024-10-01 22:40:41.615874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.558 qpair failed and we were unable to recover it. 00:41:46.558 [2024-10-01 22:40:41.625798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.558 [2024-10-01 22:40:41.625841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.558 [2024-10-01 22:40:41.625852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.558 [2024-10-01 22:40:41.625856] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.558 [2024-10-01 22:40:41.625861] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.558 [2024-10-01 22:40:41.625871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.558 qpair failed and we were unable to recover it. 
00:41:46.558 [2024-10-01 22:40:41.635862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.558 [2024-10-01 22:40:41.635902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.558 [2024-10-01 22:40:41.635912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.558 [2024-10-01 22:40:41.635917] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.558 [2024-10-01 22:40:41.635921] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.558 [2024-10-01 22:40:41.635931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.558 qpair failed and we were unable to recover it. 00:41:46.558 [2024-10-01 22:40:41.645865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.558 [2024-10-01 22:40:41.645904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.558 [2024-10-01 22:40:41.645914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.558 [2024-10-01 22:40:41.645919] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.558 [2024-10-01 22:40:41.645923] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.558 [2024-10-01 22:40:41.645936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.558 qpair failed and we were unable to recover it. 00:41:46.558 [2024-10-01 22:40:41.655923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.558 [2024-10-01 22:40:41.655998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.558 [2024-10-01 22:40:41.656007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.558 [2024-10-01 22:40:41.656012] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.558 [2024-10-01 22:40:41.656016] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.558 [2024-10-01 22:40:41.656026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.558 qpair failed and we were unable to recover it. 
00:41:46.558 [2024-10-01 22:40:41.665934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.558 [2024-10-01 22:40:41.665978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.558 [2024-10-01 22:40:41.665988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.558 [2024-10-01 22:40:41.665993] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.558 [2024-10-01 22:40:41.665997] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.558 [2024-10-01 22:40:41.666007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.558 qpair failed and we were unable to recover it. 00:41:46.558 [2024-10-01 22:40:41.675979] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.558 [2024-10-01 22:40:41.676017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.558 [2024-10-01 22:40:41.676027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.558 [2024-10-01 22:40:41.676032] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.558 [2024-10-01 22:40:41.676036] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.558 [2024-10-01 22:40:41.676046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.558 qpair failed and we were unable to recover it. 00:41:46.558 [2024-10-01 22:40:41.685973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.558 [2024-10-01 22:40:41.686011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.558 [2024-10-01 22:40:41.686021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.558 [2024-10-01 22:40:41.686026] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.558 [2024-10-01 22:40:41.686030] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.558 [2024-10-01 22:40:41.686040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.558 qpair failed and we were unable to recover it. 
00:41:46.558 [2024-10-01 22:40:41.696009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.558 [2024-10-01 22:40:41.696047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.558 [2024-10-01 22:40:41.696059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.558 [2024-10-01 22:40:41.696064] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.558 [2024-10-01 22:40:41.696069] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.558 [2024-10-01 22:40:41.696078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.558 qpair failed and we were unable to recover it. 00:41:46.558 [2024-10-01 22:40:41.706012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.558 [2024-10-01 22:40:41.706060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.558 [2024-10-01 22:40:41.706070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.558 [2024-10-01 22:40:41.706075] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.558 [2024-10-01 22:40:41.706079] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.558 [2024-10-01 22:40:41.706089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.558 qpair failed and we were unable to recover it. 00:41:46.558 [2024-10-01 22:40:41.716077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.558 [2024-10-01 22:40:41.716115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.558 [2024-10-01 22:40:41.716125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.558 [2024-10-01 22:40:41.716130] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.558 [2024-10-01 22:40:41.716135] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.558 [2024-10-01 22:40:41.716144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.558 qpair failed and we were unable to recover it. 
00:41:46.558 [2024-10-01 22:40:41.726074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.558 [2024-10-01 22:40:41.726127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.558 [2024-10-01 22:40:41.726137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.558 [2024-10-01 22:40:41.726142] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.558 [2024-10-01 22:40:41.726147] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.558 [2024-10-01 22:40:41.726156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.558 qpair failed and we were unable to recover it. 00:41:46.558 [2024-10-01 22:40:41.736122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.558 [2024-10-01 22:40:41.736191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.558 [2024-10-01 22:40:41.736201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.558 [2024-10-01 22:40:41.736206] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.558 [2024-10-01 22:40:41.736213] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.558 [2024-10-01 22:40:41.736222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.558 qpair failed and we were unable to recover it. 00:41:46.558 [2024-10-01 22:40:41.746153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.558 [2024-10-01 22:40:41.746198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.558 [2024-10-01 22:40:41.746208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.558 [2024-10-01 22:40:41.746213] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.558 [2024-10-01 22:40:41.746217] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.558 [2024-10-01 22:40:41.746227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.559 qpair failed and we were unable to recover it. 
00:41:46.559 [2024-10-01 22:40:41.756180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.559 [2024-10-01 22:40:41.756216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.559 [2024-10-01 22:40:41.756226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.559 [2024-10-01 22:40:41.756230] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.559 [2024-10-01 22:40:41.756235] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.559 [2024-10-01 22:40:41.756244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.559 qpair failed and we were unable to recover it. 00:41:46.559 [2024-10-01 22:40:41.766192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.559 [2024-10-01 22:40:41.766234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.559 [2024-10-01 22:40:41.766244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.559 [2024-10-01 22:40:41.766248] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.559 [2024-10-01 22:40:41.766253] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.559 [2024-10-01 22:40:41.766262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.559 qpair failed and we were unable to recover it. 00:41:46.559 [2024-10-01 22:40:41.776230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.559 [2024-10-01 22:40:41.776287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.559 [2024-10-01 22:40:41.776297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.559 [2024-10-01 22:40:41.776301] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.559 [2024-10-01 22:40:41.776306] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.559 [2024-10-01 22:40:41.776315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.559 qpair failed and we were unable to recover it. 
00:41:46.559 [2024-10-01 22:40:41.786261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.559 [2024-10-01 22:40:41.786305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.559 [2024-10-01 22:40:41.786315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.559 [2024-10-01 22:40:41.786320] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.559 [2024-10-01 22:40:41.786324] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.559 [2024-10-01 22:40:41.786334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.559 qpair failed and we were unable to recover it. 00:41:46.559 [2024-10-01 22:40:41.796271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.559 [2024-10-01 22:40:41.796313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.559 [2024-10-01 22:40:41.796324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.559 [2024-10-01 22:40:41.796329] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.559 [2024-10-01 22:40:41.796333] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.559 [2024-10-01 22:40:41.796344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.559 qpair failed and we were unable to recover it. 00:41:46.559 [2024-10-01 22:40:41.806287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.559 [2024-10-01 22:40:41.806325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.559 [2024-10-01 22:40:41.806335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.559 [2024-10-01 22:40:41.806340] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.559 [2024-10-01 22:40:41.806344] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.559 [2024-10-01 22:40:41.806355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.559 qpair failed and we were unable to recover it. 
00:41:46.821 [2024-10-01 22:40:41.816336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.821 [2024-10-01 22:40:41.816379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.821 [2024-10-01 22:40:41.816389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.821 [2024-10-01 22:40:41.816394] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.821 [2024-10-01 22:40:41.816398] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.821 [2024-10-01 22:40:41.816408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.821 qpair failed and we were unable to recover it. 00:41:46.821 [2024-10-01 22:40:41.826367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.821 [2024-10-01 22:40:41.826458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.821 [2024-10-01 22:40:41.826468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.821 [2024-10-01 22:40:41.826473] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.821 [2024-10-01 22:40:41.826481] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.821 [2024-10-01 22:40:41.826492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.821 qpair failed and we were unable to recover it. 00:41:46.821 [2024-10-01 22:40:41.836361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.821 [2024-10-01 22:40:41.836422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.821 [2024-10-01 22:40:41.836431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.821 [2024-10-01 22:40:41.836436] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.821 [2024-10-01 22:40:41.836440] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.821 [2024-10-01 22:40:41.836450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.821 qpair failed and we were unable to recover it. 
00:41:46.821 [2024-10-01 22:40:41.846406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.821 [2024-10-01 22:40:41.846492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.821 [2024-10-01 22:40:41.846502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.821 [2024-10-01 22:40:41.846507] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.821 [2024-10-01 22:40:41.846511] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.821 [2024-10-01 22:40:41.846521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.821 qpair failed and we were unable to recover it. 00:41:46.821 [2024-10-01 22:40:41.856430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.821 [2024-10-01 22:40:41.856475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.821 [2024-10-01 22:40:41.856485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.821 [2024-10-01 22:40:41.856489] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.821 [2024-10-01 22:40:41.856494] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.821 [2024-10-01 22:40:41.856504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.821 qpair failed and we were unable to recover it. 00:41:46.821 [2024-10-01 22:40:41.866469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.821 [2024-10-01 22:40:41.866515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.821 [2024-10-01 22:40:41.866525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.821 [2024-10-01 22:40:41.866530] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.821 [2024-10-01 22:40:41.866534] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.821 [2024-10-01 22:40:41.866544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.821 qpair failed and we were unable to recover it. 
00:41:46.821 [2024-10-01 22:40:41.876491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.821 [2024-10-01 22:40:41.876529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.821 [2024-10-01 22:40:41.876539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.821 [2024-10-01 22:40:41.876544] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.821 [2024-10-01 22:40:41.876548] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.821 [2024-10-01 22:40:41.876558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.821 qpair failed and we were unable to recover it. 00:41:46.821 [2024-10-01 22:40:41.886513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.821 [2024-10-01 22:40:41.886550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.821 [2024-10-01 22:40:41.886560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.821 [2024-10-01 22:40:41.886565] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.821 [2024-10-01 22:40:41.886569] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.821 [2024-10-01 22:40:41.886578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.821 qpair failed and we were unable to recover it. 00:41:46.821 [2024-10-01 22:40:41.896563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.821 [2024-10-01 22:40:41.896603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.821 [2024-10-01 22:40:41.896613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.821 [2024-10-01 22:40:41.896618] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.821 [2024-10-01 22:40:41.896622] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.821 [2024-10-01 22:40:41.896635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.821 qpair failed and we were unable to recover it. 
00:41:46.821 [2024-10-01 22:40:41.906549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.822 [2024-10-01 22:40:41.906591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.822 [2024-10-01 22:40:41.906601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.822 [2024-10-01 22:40:41.906606] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.822 [2024-10-01 22:40:41.906610] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.822 [2024-10-01 22:40:41.906620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.822 qpair failed and we were unable to recover it. 00:41:46.822 [2024-10-01 22:40:41.916548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.822 [2024-10-01 22:40:41.916627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.822 [2024-10-01 22:40:41.916636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.822 [2024-10-01 22:40:41.916644] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.822 [2024-10-01 22:40:41.916648] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.822 [2024-10-01 22:40:41.916659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.822 qpair failed and we were unable to recover it. 00:41:46.822 [2024-10-01 22:40:41.926614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.822 [2024-10-01 22:40:41.926655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.822 [2024-10-01 22:40:41.926666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.822 [2024-10-01 22:40:41.926670] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.822 [2024-10-01 22:40:41.926675] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.822 [2024-10-01 22:40:41.926685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.822 qpair failed and we were unable to recover it. 
00:41:46.822 [2024-10-01 22:40:41.936611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.822 [2024-10-01 22:40:41.936658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.822 [2024-10-01 22:40:41.936667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.822 [2024-10-01 22:40:41.936672] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.822 [2024-10-01 22:40:41.936677] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.822 [2024-10-01 22:40:41.936686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.822 qpair failed and we were unable to recover it. 00:41:46.822 [2024-10-01 22:40:41.946682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.822 [2024-10-01 22:40:41.946723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.822 [2024-10-01 22:40:41.946733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.822 [2024-10-01 22:40:41.946738] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.822 [2024-10-01 22:40:41.946742] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.822 [2024-10-01 22:40:41.946752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.822 qpair failed and we were unable to recover it. 00:41:46.822 [2024-10-01 22:40:41.956685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.822 [2024-10-01 22:40:41.956721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.822 [2024-10-01 22:40:41.956731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.822 [2024-10-01 22:40:41.956735] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.822 [2024-10-01 22:40:41.956740] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.822 [2024-10-01 22:40:41.956750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.822 qpair failed and we were unable to recover it. 
00:41:46.822 [2024-10-01 22:40:41.966685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.822 [2024-10-01 22:40:41.966722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.822 [2024-10-01 22:40:41.966732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.822 [2024-10-01 22:40:41.966737] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.822 [2024-10-01 22:40:41.966741] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.822 [2024-10-01 22:40:41.966751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.822 qpair failed and we were unable to recover it. 00:41:46.822 [2024-10-01 22:40:41.976745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.822 [2024-10-01 22:40:41.976817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.822 [2024-10-01 22:40:41.976827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.822 [2024-10-01 22:40:41.976831] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.822 [2024-10-01 22:40:41.976836] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.822 [2024-10-01 22:40:41.976846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.822 qpair failed and we were unable to recover it. 00:41:46.822 [2024-10-01 22:40:41.986767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.822 [2024-10-01 22:40:41.986821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.822 [2024-10-01 22:40:41.986830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.822 [2024-10-01 22:40:41.986835] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.822 [2024-10-01 22:40:41.986840] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.822 [2024-10-01 22:40:41.986850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.822 qpair failed and we were unable to recover it. 
00:41:46.822 [2024-10-01 22:40:41.996818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.822 [2024-10-01 22:40:41.996853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.822 [2024-10-01 22:40:41.996863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.822 [2024-10-01 22:40:41.996868] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.822 [2024-10-01 22:40:41.996873] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.822 [2024-10-01 22:40:41.996883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.822 qpair failed and we were unable to recover it. 00:41:46.822 [2024-10-01 22:40:42.006844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.822 [2024-10-01 22:40:42.006879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.822 [2024-10-01 22:40:42.006889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.822 [2024-10-01 22:40:42.006896] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.822 [2024-10-01 22:40:42.006901] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.822 [2024-10-01 22:40:42.006911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.822 qpair failed and we were unable to recover it. 00:41:46.822 [2024-10-01 22:40:42.016879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.822 [2024-10-01 22:40:42.016918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.822 [2024-10-01 22:40:42.016927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.822 [2024-10-01 22:40:42.016932] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.822 [2024-10-01 22:40:42.016937] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.822 [2024-10-01 22:40:42.016946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.823 qpair failed and we were unable to recover it. 
00:41:46.823 [2024-10-01 22:40:42.026865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.823 [2024-10-01 22:40:42.026907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.823 [2024-10-01 22:40:42.026917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.823 [2024-10-01 22:40:42.026922] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.823 [2024-10-01 22:40:42.026926] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.823 [2024-10-01 22:40:42.026935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.823 qpair failed and we were unable to recover it. 00:41:46.823 [2024-10-01 22:40:42.036920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.823 [2024-10-01 22:40:42.036961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.823 [2024-10-01 22:40:42.036971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.823 [2024-10-01 22:40:42.036976] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.823 [2024-10-01 22:40:42.036981] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.823 [2024-10-01 22:40:42.036991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.823 qpair failed and we were unable to recover it. 00:41:46.823 [2024-10-01 22:40:42.046919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.823 [2024-10-01 22:40:42.046955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.823 [2024-10-01 22:40:42.046965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.823 [2024-10-01 22:40:42.046970] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.823 [2024-10-01 22:40:42.046974] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.823 [2024-10-01 22:40:42.046984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.823 qpair failed and we were unable to recover it. 
00:41:46.823 [2024-10-01 22:40:42.056926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.823 [2024-10-01 22:40:42.056968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.823 [2024-10-01 22:40:42.056978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.823 [2024-10-01 22:40:42.056983] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.823 [2024-10-01 22:40:42.056987] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.823 [2024-10-01 22:40:42.056997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.823 qpair failed and we were unable to recover it. 00:41:46.823 [2024-10-01 22:40:42.067003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:46.823 [2024-10-01 22:40:42.067049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:46.823 [2024-10-01 22:40:42.067058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:46.823 [2024-10-01 22:40:42.067063] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:46.823 [2024-10-01 22:40:42.067067] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:46.823 [2024-10-01 22:40:42.067077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:46.823 qpair failed and we were unable to recover it. 00:41:47.084 [2024-10-01 22:40:42.076879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:47.084 [2024-10-01 22:40:42.076919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:47.084 [2024-10-01 22:40:42.076929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:47.084 [2024-10-01 22:40:42.076934] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:47.084 [2024-10-01 22:40:42.076938] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:47.084 [2024-10-01 22:40:42.076948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:47.084 qpair failed and we were unable to recover it. 
00:41:47.084 [2024-10-01 22:40:42.087051] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:47.085 [2024-10-01 22:40:42.087090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:47.085 [2024-10-01 22:40:42.087100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:47.085 [2024-10-01 22:40:42.087105] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:47.085 [2024-10-01 22:40:42.087110] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:47.085 [2024-10-01 22:40:42.087119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:47.085 qpair failed and we were unable to recover it. 00:41:47.085 [2024-10-01 22:40:42.097045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:47.085 [2024-10-01 22:40:42.097103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:47.085 [2024-10-01 22:40:42.097115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:47.085 [2024-10-01 22:40:42.097120] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:47.085 [2024-10-01 22:40:42.097124] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:47.085 [2024-10-01 22:40:42.097134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:47.085 qpair failed and we were unable to recover it. 00:41:47.085 [2024-10-01 22:40:42.107120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:47.085 [2024-10-01 22:40:42.107203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:47.085 [2024-10-01 22:40:42.107213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:47.085 [2024-10-01 22:40:42.107218] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:47.085 [2024-10-01 22:40:42.107222] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:47.085 [2024-10-01 22:40:42.107232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:47.085 qpair failed and we were unable to recover it. 
00:41:47.085 [2024-10-01 22:40:42.116990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.085 [2024-10-01 22:40:42.117032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.085 [2024-10-01 22:40:42.117042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.085 [2024-10-01 22:40:42.117047] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.085 [2024-10-01 22:40:42.117051] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.085 [2024-10-01 22:40:42.117061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.085 qpair failed and we were unable to recover it.
00:41:47.085 [2024-10-01 22:40:42.127145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.085 [2024-10-01 22:40:42.127182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.085 [2024-10-01 22:40:42.127191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.085 [2024-10-01 22:40:42.127196] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.085 [2024-10-01 22:40:42.127200] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.085 [2024-10-01 22:40:42.127210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.085 qpair failed and we were unable to recover it.
00:41:47.085 [2024-10-01 22:40:42.137146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.085 [2024-10-01 22:40:42.137205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.085 [2024-10-01 22:40:42.137215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.085 [2024-10-01 22:40:42.137220] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.085 [2024-10-01 22:40:42.137224] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.085 [2024-10-01 22:40:42.137237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.085 qpair failed and we were unable to recover it.
00:41:47.085 [2024-10-01 22:40:42.147222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.085 [2024-10-01 22:40:42.147300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.085 [2024-10-01 22:40:42.147311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.085 [2024-10-01 22:40:42.147316] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.085 [2024-10-01 22:40:42.147320] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.085 [2024-10-01 22:40:42.147331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.085 qpair failed and we were unable to recover it.
00:41:47.085 [2024-10-01 22:40:42.157202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.085 [2024-10-01 22:40:42.157238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.085 [2024-10-01 22:40:42.157248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.085 [2024-10-01 22:40:42.157253] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.085 [2024-10-01 22:40:42.157257] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.085 [2024-10-01 22:40:42.157267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.085 qpair failed and we were unable to recover it.
00:41:47.085 [2024-10-01 22:40:42.167255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.085 [2024-10-01 22:40:42.167294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.085 [2024-10-01 22:40:42.167304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.085 [2024-10-01 22:40:42.167309] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.085 [2024-10-01 22:40:42.167314] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.085 [2024-10-01 22:40:42.167324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.085 qpair failed and we were unable to recover it.
00:41:47.085 [2024-10-01 22:40:42.177277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.085 [2024-10-01 22:40:42.177316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.085 [2024-10-01 22:40:42.177326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.085 [2024-10-01 22:40:42.177331] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.085 [2024-10-01 22:40:42.177336] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.085 [2024-10-01 22:40:42.177346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.085 qpair failed and we were unable to recover it.
00:41:47.085 [2024-10-01 22:40:42.187344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.085 [2024-10-01 22:40:42.187417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.085 [2024-10-01 22:40:42.187432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.085 [2024-10-01 22:40:42.187437] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.085 [2024-10-01 22:40:42.187441] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.085 [2024-10-01 22:40:42.187451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.085 qpair failed and we were unable to recover it.
00:41:47.085 [2024-10-01 22:40:42.197337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.085 [2024-10-01 22:40:42.197373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.085 [2024-10-01 22:40:42.197383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.085 [2024-10-01 22:40:42.197388] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.085 [2024-10-01 22:40:42.197392] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.085 [2024-10-01 22:40:42.197402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.086 qpair failed and we were unable to recover it.
00:41:47.086 [2024-10-01 22:40:42.207368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.086 [2024-10-01 22:40:42.207448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.086 [2024-10-01 22:40:42.207458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.086 [2024-10-01 22:40:42.207463] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.086 [2024-10-01 22:40:42.207468] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.086 [2024-10-01 22:40:42.207478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.086 qpair failed and we were unable to recover it.
00:41:47.086 [2024-10-01 22:40:42.217301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.086 [2024-10-01 22:40:42.217343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.086 [2024-10-01 22:40:42.217352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.086 [2024-10-01 22:40:42.217357] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.086 [2024-10-01 22:40:42.217361] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.086 [2024-10-01 22:40:42.217371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.086 qpair failed and we were unable to recover it.
00:41:47.086 [2024-10-01 22:40:42.227432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.086 [2024-10-01 22:40:42.227491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.086 [2024-10-01 22:40:42.227501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.086 [2024-10-01 22:40:42.227506] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.086 [2024-10-01 22:40:42.227510] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.086 [2024-10-01 22:40:42.227523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.086 qpair failed and we were unable to recover it.
00:41:47.086 [2024-10-01 22:40:42.237462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.086 [2024-10-01 22:40:42.237499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.086 [2024-10-01 22:40:42.237509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.086 [2024-10-01 22:40:42.237514] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.086 [2024-10-01 22:40:42.237518] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.086 [2024-10-01 22:40:42.237528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.086 qpair failed and we were unable to recover it.
00:41:47.086 [2024-10-01 22:40:42.247482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.086 [2024-10-01 22:40:42.247518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.086 [2024-10-01 22:40:42.247528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.086 [2024-10-01 22:40:42.247533] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.086 [2024-10-01 22:40:42.247537] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.086 [2024-10-01 22:40:42.247547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.086 qpair failed and we were unable to recover it.
00:41:47.086 [2024-10-01 22:40:42.257478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.086 [2024-10-01 22:40:42.257516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.086 [2024-10-01 22:40:42.257525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.086 [2024-10-01 22:40:42.257530] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.086 [2024-10-01 22:40:42.257534] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.086 [2024-10-01 22:40:42.257544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.086 qpair failed and we were unable to recover it.
00:41:47.086 [2024-10-01 22:40:42.267539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.086 [2024-10-01 22:40:42.267583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.086 [2024-10-01 22:40:42.267592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.086 [2024-10-01 22:40:42.267597] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.086 [2024-10-01 22:40:42.267602] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.086 [2024-10-01 22:40:42.267611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.086 qpair failed and we were unable to recover it.
00:41:47.086 [2024-10-01 22:40:42.277557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.086 [2024-10-01 22:40:42.277599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.086 [2024-10-01 22:40:42.277609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.086 [2024-10-01 22:40:42.277614] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.086 [2024-10-01 22:40:42.277618] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.086 [2024-10-01 22:40:42.277631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.086 qpair failed and we were unable to recover it.
00:41:47.086 [2024-10-01 22:40:42.287593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.086 [2024-10-01 22:40:42.287634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.086 [2024-10-01 22:40:42.287644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.086 [2024-10-01 22:40:42.287649] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.086 [2024-10-01 22:40:42.287653] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.086 [2024-10-01 22:40:42.287663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.086 qpair failed and we were unable to recover it.
00:41:47.086 [2024-10-01 22:40:42.297629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.086 [2024-10-01 22:40:42.297672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.086 [2024-10-01 22:40:42.297681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.086 [2024-10-01 22:40:42.297686] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.086 [2024-10-01 22:40:42.297691] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.086 [2024-10-01 22:40:42.297700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.086 qpair failed and we were unable to recover it.
00:41:47.086 [2024-10-01 22:40:42.307708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.086 [2024-10-01 22:40:42.307753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.086 [2024-10-01 22:40:42.307762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.086 [2024-10-01 22:40:42.307767] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.087 [2024-10-01 22:40:42.307772] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.087 [2024-10-01 22:40:42.307781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.087 qpair failed and we were unable to recover it.
00:41:47.087 [2024-10-01 22:40:42.317547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.087 [2024-10-01 22:40:42.317585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.087 [2024-10-01 22:40:42.317594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.087 [2024-10-01 22:40:42.317599] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.087 [2024-10-01 22:40:42.317606] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.087 [2024-10-01 22:40:42.317616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.087 qpair failed and we were unable to recover it.
00:41:47.087 [2024-10-01 22:40:42.327731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.087 [2024-10-01 22:40:42.327771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.087 [2024-10-01 22:40:42.327781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.087 [2024-10-01 22:40:42.327786] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.087 [2024-10-01 22:40:42.327790] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.087 [2024-10-01 22:40:42.327800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.087 qpair failed and we were unable to recover it.
00:41:47.347 [2024-10-01 22:40:42.337732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.347 [2024-10-01 22:40:42.337772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.347 [2024-10-01 22:40:42.337781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.347 [2024-10-01 22:40:42.337786] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.347 [2024-10-01 22:40:42.337791] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.347 [2024-10-01 22:40:42.337800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.348 qpair failed and we were unable to recover it.
00:41:47.348 [2024-10-01 22:40:42.347763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.348 [2024-10-01 22:40:42.347805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.348 [2024-10-01 22:40:42.347815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.348 [2024-10-01 22:40:42.347820] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.348 [2024-10-01 22:40:42.347824] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.348 [2024-10-01 22:40:42.347834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.348 qpair failed and we were unable to recover it.
00:41:47.348 [2024-10-01 22:40:42.357767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.348 [2024-10-01 22:40:42.357837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.348 [2024-10-01 22:40:42.357847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.348 [2024-10-01 22:40:42.357851] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.348 [2024-10-01 22:40:42.357856] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.348 [2024-10-01 22:40:42.357866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.348 qpair failed and we were unable to recover it.
00:41:47.348 [2024-10-01 22:40:42.367806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.348 [2024-10-01 22:40:42.367884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.348 [2024-10-01 22:40:42.367893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.348 [2024-10-01 22:40:42.367898] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.348 [2024-10-01 22:40:42.367903] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.348 [2024-10-01 22:40:42.367912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.348 qpair failed and we were unable to recover it.
00:41:47.348 [2024-10-01 22:40:42.377883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.348 [2024-10-01 22:40:42.377966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.348 [2024-10-01 22:40:42.377975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.348 [2024-10-01 22:40:42.377980] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.348 [2024-10-01 22:40:42.377985] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.348 [2024-10-01 22:40:42.377994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.348 qpair failed and we were unable to recover it.
00:41:47.348 [2024-10-01 22:40:42.387884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.348 [2024-10-01 22:40:42.387925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.348 [2024-10-01 22:40:42.387935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.348 [2024-10-01 22:40:42.387939] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.348 [2024-10-01 22:40:42.387944] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.348 [2024-10-01 22:40:42.387954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.348 qpair failed and we were unable to recover it.
00:41:47.348 [2024-10-01 22:40:42.397881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.348 [2024-10-01 22:40:42.397920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.348 [2024-10-01 22:40:42.397929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.348 [2024-10-01 22:40:42.397934] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.348 [2024-10-01 22:40:42.397938] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.348 [2024-10-01 22:40:42.397948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.348 qpair failed and we were unable to recover it.
00:41:47.348 [2024-10-01 22:40:42.407916] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.348 [2024-10-01 22:40:42.407970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.348 [2024-10-01 22:40:42.407979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.348 [2024-10-01 22:40:42.407987] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.348 [2024-10-01 22:40:42.407991] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.348 [2024-10-01 22:40:42.408001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.348 qpair failed and we were unable to recover it.
00:41:47.348 [2024-10-01 22:40:42.417953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.348 [2024-10-01 22:40:42.417995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.348 [2024-10-01 22:40:42.418004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.348 [2024-10-01 22:40:42.418009] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.348 [2024-10-01 22:40:42.418013] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.348 [2024-10-01 22:40:42.418023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.348 qpair failed and we were unable to recover it.
00:41:47.348 [2024-10-01 22:40:42.427982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.348 [2024-10-01 22:40:42.428027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.348 [2024-10-01 22:40:42.428037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.348 [2024-10-01 22:40:42.428042] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.348 [2024-10-01 22:40:42.428046] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.348 [2024-10-01 22:40:42.428056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.348 qpair failed and we were unable to recover it.
00:41:47.348 [2024-10-01 22:40:42.438009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.348 [2024-10-01 22:40:42.438048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.348 [2024-10-01 22:40:42.438057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.348 [2024-10-01 22:40:42.438062] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.348 [2024-10-01 22:40:42.438066] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.348 [2024-10-01 22:40:42.438076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.348 qpair failed and we were unable to recover it.
00:41:47.348 [2024-10-01 22:40:42.448017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.348 [2024-10-01 22:40:42.448054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.348 [2024-10-01 22:40:42.448064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.348 [2024-10-01 22:40:42.448069] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.348 [2024-10-01 22:40:42.448073] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.348 [2024-10-01 22:40:42.448083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.348 qpair failed and we were unable to recover it.
00:41:47.348 [2024-10-01 22:40:42.458068] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.348 [2024-10-01 22:40:42.458110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.349 [2024-10-01 22:40:42.458119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.349 [2024-10-01 22:40:42.458124] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.349 [2024-10-01 22:40:42.458128] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.349 [2024-10-01 22:40:42.458138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.349 qpair failed and we were unable to recover it.
00:41:47.349 [2024-10-01 22:40:42.468083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.349 [2024-10-01 22:40:42.468140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.349 [2024-10-01 22:40:42.468149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.349 [2024-10-01 22:40:42.468154] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.349 [2024-10-01 22:40:42.468159] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.349 [2024-10-01 22:40:42.468169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.349 qpair failed and we were unable to recover it.
00:41:47.349 [2024-10-01 22:40:42.478096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.349 [2024-10-01 22:40:42.478134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.349 [2024-10-01 22:40:42.478143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.349 [2024-10-01 22:40:42.478148] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.349 [2024-10-01 22:40:42.478153] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.349 [2024-10-01 22:40:42.478162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.349 qpair failed and we were unable to recover it.
00:41:47.349 [2024-10-01 22:40:42.488114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.349 [2024-10-01 22:40:42.488152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.349 [2024-10-01 22:40:42.488162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.349 [2024-10-01 22:40:42.488167] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.349 [2024-10-01 22:40:42.488171] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.349 [2024-10-01 22:40:42.488181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.349 qpair failed and we were unable to recover it.
00:41:47.349 [2024-10-01 22:40:42.498163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.349 [2024-10-01 22:40:42.498205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.349 [2024-10-01 22:40:42.498214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.349 [2024-10-01 22:40:42.498221] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.349 [2024-10-01 22:40:42.498226] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.349 [2024-10-01 22:40:42.498236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.349 qpair failed and we were unable to recover it.
00:41:47.349 [2024-10-01 22:40:42.508197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.349 [2024-10-01 22:40:42.508258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.349 [2024-10-01 22:40:42.508268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.349 [2024-10-01 22:40:42.508272] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.349 [2024-10-01 22:40:42.508277] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.349 [2024-10-01 22:40:42.508286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.349 qpair failed and we were unable to recover it.
00:41:47.349 [2024-10-01 22:40:42.518214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.349 [2024-10-01 22:40:42.518286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.349 [2024-10-01 22:40:42.518296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.349 [2024-10-01 22:40:42.518301] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.349 [2024-10-01 22:40:42.518305] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.349 [2024-10-01 22:40:42.518315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.349 qpair failed and we were unable to recover it.
00:41:47.349 [2024-10-01 22:40:42.528278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.349 [2024-10-01 22:40:42.528348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.349 [2024-10-01 22:40:42.528358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.349 [2024-10-01 22:40:42.528363] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.349 [2024-10-01 22:40:42.528367] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.349 [2024-10-01 22:40:42.528377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.349 qpair failed and we were unable to recover it.
00:41:47.349 [2024-10-01 22:40:42.538266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.349 [2024-10-01 22:40:42.538350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.349 [2024-10-01 22:40:42.538360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.349 [2024-10-01 22:40:42.538365] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.349 [2024-10-01 22:40:42.538369] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.349 [2024-10-01 22:40:42.538379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.349 qpair failed and we were unable to recover it.
00:41:47.349 [2024-10-01 22:40:42.548315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.349 [2024-10-01 22:40:42.548355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.349 [2024-10-01 22:40:42.548365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.349 [2024-10-01 22:40:42.548370] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.349 [2024-10-01 22:40:42.548375] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.349 [2024-10-01 22:40:42.548385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.349 qpair failed and we were unable to recover it.
00:41:47.349 [2024-10-01 22:40:42.558263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.349 [2024-10-01 22:40:42.558306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.349 [2024-10-01 22:40:42.558316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.349 [2024-10-01 22:40:42.558321] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.349 [2024-10-01 22:40:42.558326] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.349 [2024-10-01 22:40:42.558336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.349 qpair failed and we were unable to recover it.
00:41:47.349 [2024-10-01 22:40:42.568336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.350 [2024-10-01 22:40:42.568375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.350 [2024-10-01 22:40:42.568386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.350 [2024-10-01 22:40:42.568390] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.350 [2024-10-01 22:40:42.568395] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.350 [2024-10-01 22:40:42.568405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.350 qpair failed and we were unable to recover it.
00:41:47.350 [2024-10-01 22:40:42.578385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.350 [2024-10-01 22:40:42.578461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.350 [2024-10-01 22:40:42.578479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.350 [2024-10-01 22:40:42.578485] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.350 [2024-10-01 22:40:42.578490] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.350 [2024-10-01 22:40:42.578504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.350 qpair failed and we were unable to recover it.
00:41:47.350 [2024-10-01 22:40:42.588407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.350 [2024-10-01 22:40:42.588464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.350 [2024-10-01 22:40:42.588479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.350 [2024-10-01 22:40:42.588484] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.350 [2024-10-01 22:40:42.588488] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.350 [2024-10-01 22:40:42.588499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.350 qpair failed and we were unable to recover it.
00:41:47.350 [2024-10-01 22:40:42.598382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.350 [2024-10-01 22:40:42.598417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.350 [2024-10-01 22:40:42.598427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.350 [2024-10-01 22:40:42.598432] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.350 [2024-10-01 22:40:42.598437] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.350 [2024-10-01 22:40:42.598447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.350 qpair failed and we were unable to recover it.
00:41:47.610 [2024-10-01 22:40:42.608438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.610 [2024-10-01 22:40:42.608487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.610 [2024-10-01 22:40:42.608497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.610 [2024-10-01 22:40:42.608501] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.610 [2024-10-01 22:40:42.608506] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.610 [2024-10-01 22:40:42.608516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.610 qpair failed and we were unable to recover it.
00:41:47.610 [2024-10-01 22:40:42.618462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.610 [2024-10-01 22:40:42.618509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.610 [2024-10-01 22:40:42.618519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.610 [2024-10-01 22:40:42.618524] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.610 [2024-10-01 22:40:42.618528] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.610 [2024-10-01 22:40:42.618539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.610 qpair failed and we were unable to recover it.
00:41:47.610 [2024-10-01 22:40:42.628373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.610 [2024-10-01 22:40:42.628415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.610 [2024-10-01 22:40:42.628425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.610 [2024-10-01 22:40:42.628429] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.611 [2024-10-01 22:40:42.628434] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.611 [2024-10-01 22:40:42.628447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.611 qpair failed and we were unable to recover it.
00:41:47.611 [2024-10-01 22:40:42.638421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.611 [2024-10-01 22:40:42.638477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.611 [2024-10-01 22:40:42.638487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.611 [2024-10-01 22:40:42.638492] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.611 [2024-10-01 22:40:42.638496] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.611 [2024-10-01 22:40:42.638506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.611 qpair failed and we were unable to recover it.
00:41:47.611 [2024-10-01 22:40:42.648431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.611 [2024-10-01 22:40:42.648468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.611 [2024-10-01 22:40:42.648479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.611 [2024-10-01 22:40:42.648484] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.611 [2024-10-01 22:40:42.648489] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.611 [2024-10-01 22:40:42.648499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.611 qpair failed and we were unable to recover it.
00:41:47.611 [2024-10-01 22:40:42.658552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.611 [2024-10-01 22:40:42.658593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.611 [2024-10-01 22:40:42.658603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.611 [2024-10-01 22:40:42.658608] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.611 [2024-10-01 22:40:42.658612] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.611 [2024-10-01 22:40:42.658623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.611 qpair failed and we were unable to recover it.
00:41:47.611 [2024-10-01 22:40:42.668629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.611 [2024-10-01 22:40:42.668671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.611 [2024-10-01 22:40:42.668680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.611 [2024-10-01 22:40:42.668685] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.611 [2024-10-01 22:40:42.668689] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.611 [2024-10-01 22:40:42.668699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.611 qpair failed and we were unable to recover it.
00:41:47.611 [2024-10-01 22:40:42.678501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:47.611 [2024-10-01 22:40:42.678545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:47.611 [2024-10-01 22:40:42.678559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:47.611 [2024-10-01 22:40:42.678563] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:47.611 [2024-10-01 22:40:42.678568] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90
00:41:47.611 [2024-10-01 22:40:42.678578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:47.611 qpair failed and we were unable to recover it.
00:41:47.611 [2024-10-01 22:40:42.688658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:47.611 [2024-10-01 22:40:42.688698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:47.611 [2024-10-01 22:40:42.688708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:47.611 [2024-10-01 22:40:42.688713] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:47.611 [2024-10-01 22:40:42.688717] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:47.611 [2024-10-01 22:40:42.688727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:47.611 qpair failed and we were unable to recover it. 00:41:47.611 [2024-10-01 22:40:42.698693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:47.611 [2024-10-01 22:40:42.698734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:47.611 [2024-10-01 22:40:42.698743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:47.611 [2024-10-01 22:40:42.698748] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:47.611 [2024-10-01 22:40:42.698753] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:47.611 [2024-10-01 22:40:42.698763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:47.611 qpair failed and we were unable to recover it. 00:41:47.611 [2024-10-01 22:40:42.708699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:47.611 [2024-10-01 22:40:42.708741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:47.611 [2024-10-01 22:40:42.708750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:47.611 [2024-10-01 22:40:42.708755] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:47.611 [2024-10-01 22:40:42.708759] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:47.611 [2024-10-01 22:40:42.708770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:47.611 qpair failed and we were unable to recover it. 
00:41:47.611 [2024-10-01 22:40:42.718711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:47.611 [2024-10-01 22:40:42.718748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:47.611 [2024-10-01 22:40:42.718758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:47.611 [2024-10-01 22:40:42.718763] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:47.611 [2024-10-01 22:40:42.718767] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:47.611 [2024-10-01 22:40:42.718780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:47.611 qpair failed and we were unable to recover it. 00:41:47.611 [2024-10-01 22:40:42.728776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:47.611 [2024-10-01 22:40:42.728815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:47.611 [2024-10-01 22:40:42.728825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:47.611 [2024-10-01 22:40:42.728830] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:47.611 [2024-10-01 22:40:42.728834] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:47.611 [2024-10-01 22:40:42.728845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:47.611 qpair failed and we were unable to recover it. 00:41:47.611 [2024-10-01 22:40:42.738784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:47.611 [2024-10-01 22:40:42.738827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:47.611 [2024-10-01 22:40:42.738837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:47.611 [2024-10-01 22:40:42.738841] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:47.611 [2024-10-01 22:40:42.738846] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:47.611 [2024-10-01 22:40:42.738856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:47.611 qpair failed and we were unable to recover it. 
00:41:47.611 [2024-10-01 22:40:42.748903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:47.611 [2024-10-01 22:40:42.748963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:47.611 [2024-10-01 22:40:42.748973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:47.611 [2024-10-01 22:40:42.748977] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:47.611 [2024-10-01 22:40:42.748982] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:47.611 [2024-10-01 22:40:42.748992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:47.611 qpair failed and we were unable to recover it. 00:41:47.611 [2024-10-01 22:40:42.758851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:47.611 [2024-10-01 22:40:42.758890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:47.611 [2024-10-01 22:40:42.758899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:47.611 [2024-10-01 22:40:42.758904] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:47.611 [2024-10-01 22:40:42.758909] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:47.611 [2024-10-01 22:40:42.758919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:47.611 qpair failed and we were unable to recover it. 00:41:47.611 [2024-10-01 22:40:42.768904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:47.612 [2024-10-01 22:40:42.768946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:47.612 [2024-10-01 22:40:42.768959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:47.612 [2024-10-01 22:40:42.768964] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:47.612 [2024-10-01 22:40:42.768969] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:47.612 [2024-10-01 22:40:42.768979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:47.612 qpair failed and we were unable to recover it. 
00:41:47.612 [2024-10-01 22:40:42.778896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:47.612 [2024-10-01 22:40:42.778934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:47.612 [2024-10-01 22:40:42.778944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:47.612 [2024-10-01 22:40:42.778948] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:47.612 [2024-10-01 22:40:42.778953] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:47.612 [2024-10-01 22:40:42.778963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:47.612 qpair failed and we were unable to recover it. 00:41:47.612 [2024-10-01 22:40:42.788959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:47.612 [2024-10-01 22:40:42.789001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:47.612 [2024-10-01 22:40:42.789010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:47.612 [2024-10-01 22:40:42.789015] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:47.612 [2024-10-01 22:40:42.789020] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:47.612 [2024-10-01 22:40:42.789029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:47.612 qpair failed and we were unable to recover it. 00:41:47.612 [2024-10-01 22:40:42.798963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:47.612 [2024-10-01 22:40:42.799004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:47.612 [2024-10-01 22:40:42.799013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:47.612 [2024-10-01 22:40:42.799018] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:47.612 [2024-10-01 22:40:42.799022] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:47.612 [2024-10-01 22:40:42.799032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:47.612 qpair failed and we were unable to recover it. 
00:41:47.612 [2024-10-01 22:40:42.808979] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:47.612 [2024-10-01 22:40:42.809014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:47.612 [2024-10-01 22:40:42.809024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:47.612 [2024-10-01 22:40:42.809029] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:47.612 [2024-10-01 22:40:42.809036] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:47.612 [2024-10-01 22:40:42.809046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:47.612 qpair failed and we were unable to recover it. 00:41:47.612 [2024-10-01 22:40:42.818996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:47.612 [2024-10-01 22:40:42.819079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:47.612 [2024-10-01 22:40:42.819088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:47.612 [2024-10-01 22:40:42.819093] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:47.612 [2024-10-01 22:40:42.819098] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:47.612 [2024-10-01 22:40:42.819108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:47.612 qpair failed and we were unable to recover it. 00:41:47.612 [2024-10-01 22:40:42.829027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:47.612 [2024-10-01 22:40:42.829071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:47.612 [2024-10-01 22:40:42.829081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:47.612 [2024-10-01 22:40:42.829086] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:47.612 [2024-10-01 22:40:42.829090] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:47.612 [2024-10-01 22:40:42.829100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:47.612 qpair failed and we were unable to recover it. 
00:41:47.612 [2024-10-01 22:40:42.839062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:47.612 [2024-10-01 22:40:42.839097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:47.612 [2024-10-01 22:40:42.839106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:47.612 [2024-10-01 22:40:42.839111] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:47.612 [2024-10-01 22:40:42.839115] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:47.612 [2024-10-01 22:40:42.839125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:47.612 qpair failed and we were unable to recover it. 00:41:47.612 [2024-10-01 22:40:42.849082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:47.612 [2024-10-01 22:40:42.849118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:47.612 [2024-10-01 22:40:42.849128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:47.612 [2024-10-01 22:40:42.849132] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:47.612 [2024-10-01 22:40:42.849137] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:47.612 [2024-10-01 22:40:42.849146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:47.612 qpair failed and we were unable to recover it. 00:41:47.612 [2024-10-01 22:40:42.859101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:47.612 [2024-10-01 22:40:42.859146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:47.612 [2024-10-01 22:40:42.859155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:47.612 [2024-10-01 22:40:42.859160] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:47.612 [2024-10-01 22:40:42.859164] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:47.612 [2024-10-01 22:40:42.859174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:47.612 qpair failed and we were unable to recover it. 
00:41:47.873 [2024-10-01 22:40:42.869166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:47.873 [2024-10-01 22:40:42.869240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:47.873 [2024-10-01 22:40:42.869250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:47.873 [2024-10-01 22:40:42.869254] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:47.873 [2024-10-01 22:40:42.869259] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:47.873 [2024-10-01 22:40:42.869269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:47.873 qpair failed and we were unable to recover it. 00:41:47.873 [2024-10-01 22:40:42.879185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:47.873 [2024-10-01 22:40:42.879224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:47.873 [2024-10-01 22:40:42.879234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:47.873 [2024-10-01 22:40:42.879239] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:47.873 [2024-10-01 22:40:42.879243] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:47.873 [2024-10-01 22:40:42.879253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:47.873 qpair failed and we were unable to recover it. 00:41:47.873 [2024-10-01 22:40:42.889218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:47.874 [2024-10-01 22:40:42.889257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:47.874 [2024-10-01 22:40:42.889267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:47.874 [2024-10-01 22:40:42.889272] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:47.874 [2024-10-01 22:40:42.889276] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:47.874 [2024-10-01 22:40:42.889286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:47.874 qpair failed and we were unable to recover it. 
00:41:47.874 [2024-10-01 22:40:42.899157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:47.874 [2024-10-01 22:40:42.899198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:47.874 [2024-10-01 22:40:42.899207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:47.874 [2024-10-01 22:40:42.899212] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:47.874 [2024-10-01 22:40:42.899219] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:47.874 [2024-10-01 22:40:42.899229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:47.874 qpair failed and we were unable to recover it. 00:41:47.874 [2024-10-01 22:40:42.909271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:47.874 [2024-10-01 22:40:42.909313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:47.874 [2024-10-01 22:40:42.909323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:47.874 [2024-10-01 22:40:42.909328] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:47.874 [2024-10-01 22:40:42.909332] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:47.874 [2024-10-01 22:40:42.909342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:47.874 qpair failed and we were unable to recover it. 00:41:47.874 [2024-10-01 22:40:42.919294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:47.874 [2024-10-01 22:40:42.919334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:47.874 [2024-10-01 22:40:42.919344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:47.874 [2024-10-01 22:40:42.919348] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:47.874 [2024-10-01 22:40:42.919353] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:47.874 [2024-10-01 22:40:42.919362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:47.874 qpair failed and we were unable to recover it. 
00:41:47.874 [2024-10-01 22:40:42.929304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:47.874 [2024-10-01 22:40:42.929350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:47.874 [2024-10-01 22:40:42.929368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:47.874 [2024-10-01 22:40:42.929374] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:47.874 [2024-10-01 22:40:42.929379] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:47.874 [2024-10-01 22:40:42.929393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:47.874 qpair failed and we were unable to recover it. 00:41:47.874 [2024-10-01 22:40:42.939338] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:47.874 [2024-10-01 22:40:42.939388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:47.874 [2024-10-01 22:40:42.939399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:47.874 [2024-10-01 22:40:42.939404] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:47.874 [2024-10-01 22:40:42.939408] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:47.874 [2024-10-01 22:40:42.939419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:47.874 qpair failed and we were unable to recover it. 00:41:47.874 [2024-10-01 22:40:42.949362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:47.874 [2024-10-01 22:40:42.949404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:47.874 [2024-10-01 22:40:42.949415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:47.874 [2024-10-01 22:40:42.949419] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:47.874 [2024-10-01 22:40:42.949424] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:47.874 [2024-10-01 22:40:42.949434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:47.874 qpair failed and we were unable to recover it. 
00:41:47.874 [2024-10-01 22:40:42.959410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:47.874 [2024-10-01 22:40:42.959491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:47.874 [2024-10-01 22:40:42.959504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:47.874 [2024-10-01 22:40:42.959509] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:47.874 [2024-10-01 22:40:42.959514] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:47.874 [2024-10-01 22:40:42.959525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:47.874 qpair failed and we were unable to recover it. 00:41:47.874 [2024-10-01 22:40:42.969467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:47.874 [2024-10-01 22:40:42.969534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:47.874 [2024-10-01 22:40:42.969544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:47.874 [2024-10-01 22:40:42.969549] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:47.874 [2024-10-01 22:40:42.969553] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:47.874 [2024-10-01 22:40:42.969564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:47.874 qpair failed and we were unable to recover it. 00:41:47.874 [2024-10-01 22:40:42.979486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:47.874 [2024-10-01 22:40:42.979531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:47.874 [2024-10-01 22:40:42.979540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:47.874 [2024-10-01 22:40:42.979545] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:47.874 [2024-10-01 22:40:42.979550] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:47.874 [2024-10-01 22:40:42.979560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:47.874 qpair failed and we were unable to recover it. 
00:41:47.874 [2024-10-01 22:40:42.989452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:47.874 [2024-10-01 22:40:42.989496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:47.874 [2024-10-01 22:40:42.989505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:47.874 [2024-10-01 22:40:42.989516] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:47.874 [2024-10-01 22:40:42.989521] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:47.874 [2024-10-01 22:40:42.989531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:47.874 qpair failed and we were unable to recover it. 00:41:47.874 [2024-10-01 22:40:42.999389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:47.874 [2024-10-01 22:40:42.999429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:47.874 [2024-10-01 22:40:42.999439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:47.874 [2024-10-01 22:40:42.999444] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:47.874 [2024-10-01 22:40:42.999448] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:47.874 [2024-10-01 22:40:42.999458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:47.875 qpair failed and we were unable to recover it. 00:41:47.875 [2024-10-01 22:40:43.009535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:47.875 [2024-10-01 22:40:43.009572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:47.875 [2024-10-01 22:40:43.009582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:47.875 [2024-10-01 22:40:43.009586] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:47.875 [2024-10-01 22:40:43.009591] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:47.875 [2024-10-01 22:40:43.009601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:47.875 qpair failed and we were unable to recover it. 
00:41:47.875 [2024-10-01 22:40:43.019544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:47.875 [2024-10-01 22:40:43.019587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:47.875 [2024-10-01 22:40:43.019596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:47.875 [2024-10-01 22:40:43.019601] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:47.875 [2024-10-01 22:40:43.019605] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:47.875 [2024-10-01 22:40:43.019615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:47.875 qpair failed and we were unable to recover it. 00:41:47.875 [2024-10-01 22:40:43.029583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:47.875 [2024-10-01 22:40:43.029626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:47.875 [2024-10-01 22:40:43.029637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:47.875 [2024-10-01 22:40:43.029641] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:47.875 [2024-10-01 22:40:43.029646] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:47.875 [2024-10-01 22:40:43.029656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:47.875 qpair failed and we were unable to recover it. 00:41:47.875 [2024-10-01 22:40:43.039520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:47.875 [2024-10-01 22:40:43.039558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:47.875 [2024-10-01 22:40:43.039568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:47.875 [2024-10-01 22:40:43.039573] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:47.875 [2024-10-01 22:40:43.039577] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:47.875 [2024-10-01 22:40:43.039587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:47.875 qpair failed and we were unable to recover it. 
00:41:47.875 [2024-10-01 22:40:43.049546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:47.875 [2024-10-01 22:40:43.049586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:47.875 [2024-10-01 22:40:43.049596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:47.875 [2024-10-01 22:40:43.049601] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:47.875 [2024-10-01 22:40:43.049605] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:47.875 [2024-10-01 22:40:43.049615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:47.875 qpair failed and we were unable to recover it. 00:41:47.875 [2024-10-01 22:40:43.059534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:47.875 [2024-10-01 22:40:43.059574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:47.875 [2024-10-01 22:40:43.059585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:47.875 [2024-10-01 22:40:43.059590] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:47.875 [2024-10-01 22:40:43.059594] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:47.875 [2024-10-01 22:40:43.059605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:47.875 qpair failed and we were unable to recover it. 00:41:47.875 [2024-10-01 22:40:43.069593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:47.875 [2024-10-01 22:40:43.069639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:47.875 [2024-10-01 22:40:43.069650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:47.875 [2024-10-01 22:40:43.069655] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:47.875 [2024-10-01 22:40:43.069659] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:47.875 [2024-10-01 22:40:43.069669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:47.875 qpair failed and we were unable to recover it. 
00:41:47.875 [2024-10-01 22:40:43.079633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:47.875 [2024-10-01 22:40:43.079674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:47.875 [2024-10-01 22:40:43.079686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:47.875 [2024-10-01 22:40:43.079691] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:47.875 [2024-10-01 22:40:43.079697] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:47.875 [2024-10-01 22:40:43.079707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:47.875 qpair failed and we were unable to recover it. 00:41:47.875 [2024-10-01 22:40:43.089716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:47.875 [2024-10-01 22:40:43.089757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:47.875 [2024-10-01 22:40:43.089767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:47.875 [2024-10-01 22:40:43.089772] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:47.875 [2024-10-01 22:40:43.089777] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:47.875 [2024-10-01 22:40:43.089787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:47.875 qpair failed and we were unable to recover it. 00:41:47.875 [2024-10-01 22:40:43.099646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:47.875 [2024-10-01 22:40:43.099688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:47.875 [2024-10-01 22:40:43.099697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:47.875 [2024-10-01 22:40:43.099702] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:47.875 [2024-10-01 22:40:43.099706] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:47.875 [2024-10-01 22:40:43.099716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:47.875 qpair failed and we were unable to recover it. 
00:41:47.875 [2024-10-01 22:40:43.109788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:47.875 [2024-10-01 22:40:43.109828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:47.875 [2024-10-01 22:40:43.109837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:47.875 [2024-10-01 22:40:43.109842] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:47.875 [2024-10-01 22:40:43.109847] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:47.875 [2024-10-01 22:40:43.109857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:47.875 qpair failed and we were unable to recover it. 00:41:47.875 [2024-10-01 22:40:43.119835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:47.876 [2024-10-01 22:40:43.119875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:47.876 [2024-10-01 22:40:43.119885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:47.876 [2024-10-01 22:40:43.119890] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:47.876 [2024-10-01 22:40:43.119894] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:47.876 [2024-10-01 22:40:43.119904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:47.876 qpair failed and we were unable to recover it. 00:41:48.136 [2024-10-01 22:40:43.129849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:48.136 [2024-10-01 22:40:43.129890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:48.136 [2024-10-01 22:40:43.129900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:48.136 [2024-10-01 22:40:43.129905] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:48.136 [2024-10-01 22:40:43.129909] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:48.136 [2024-10-01 22:40:43.129919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:48.136 qpair failed and we were unable to recover it. 
00:41:48.136 [2024-10-01 22:40:43.139854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:48.136 [2024-10-01 22:40:43.139896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:48.136 [2024-10-01 22:40:43.139906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:48.136 [2024-10-01 22:40:43.139911] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:48.137 [2024-10-01 22:40:43.139915] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:48.137 [2024-10-01 22:40:43.139925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:48.137 qpair failed and we were unable to recover it. 00:41:48.137 [2024-10-01 22:40:43.149932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:48.137 [2024-10-01 22:40:43.149971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:48.137 [2024-10-01 22:40:43.149981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:48.137 [2024-10-01 22:40:43.149985] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:48.137 [2024-10-01 22:40:43.149990] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:48.137 [2024-10-01 22:40:43.149999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:48.137 qpair failed and we were unable to recover it. 00:41:48.137 [2024-10-01 22:40:43.159927] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:48.137 [2024-10-01 22:40:43.159995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:48.137 [2024-10-01 22:40:43.160004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:48.137 [2024-10-01 22:40:43.160009] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:48.137 [2024-10-01 22:40:43.160014] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:48.137 [2024-10-01 22:40:43.160024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:48.137 qpair failed and we were unable to recover it. 
00:41:48.137 [2024-10-01 22:40:43.169978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:48.137 [2024-10-01 22:40:43.170014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:48.137 [2024-10-01 22:40:43.170027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:48.137 [2024-10-01 22:40:43.170031] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:48.137 [2024-10-01 22:40:43.170036] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:48.137 [2024-10-01 22:40:43.170046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:48.137 qpair failed and we were unable to recover it. 00:41:48.137 [2024-10-01 22:40:43.179995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:48.137 [2024-10-01 22:40:43.180037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:48.137 [2024-10-01 22:40:43.180046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:48.137 [2024-10-01 22:40:43.180051] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:48.137 [2024-10-01 22:40:43.180055] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:48.137 [2024-10-01 22:40:43.180065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:48.137 qpair failed and we were unable to recover it. 00:41:48.137 [2024-10-01 22:40:43.190010] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:48.137 [2024-10-01 22:40:43.190087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:48.137 [2024-10-01 22:40:43.190096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:48.137 [2024-10-01 22:40:43.190101] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:48.137 [2024-10-01 22:40:43.190105] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:48.137 [2024-10-01 22:40:43.190115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:48.137 qpair failed and we were unable to recover it. 
00:41:48.137 [2024-10-01 22:40:43.199976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:48.137 [2024-10-01 22:40:43.200016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:48.137 [2024-10-01 22:40:43.200025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:48.137 [2024-10-01 22:40:43.200030] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:48.137 [2024-10-01 22:40:43.200034] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7fc000b90 00:41:48.137 [2024-10-01 22:40:43.200044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:48.137 qpair failed and we were unable to recover it.
00:41:48.137 [the identical six-line CONNECT failure (Unknown controller ID 0x1; Connect command failed, rc -5; sct 1, sc 130; tqpair=0x7fa7fc000b90; CQ transport error -6 on qpair id 2) recurs at ~10 ms intervals from 22:40:43.210 through 22:40:43.280, each ending in "qpair failed and we were unable to recover it."; the eight further identical occurrences are elided here]
00:41:48.138 [2024-10-01 22:40:43.290320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:48.138 [2024-10-01 22:40:43.290422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:48.138 [2024-10-01 22:40:43.290485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:48.138 [2024-10-01 22:40:43.290511] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:48.138 [2024-10-01 22:40:43.290533] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7f8000b90 00:41:48.138 [2024-10-01 22:40:43.290586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:41:48.138 qpair failed and we were unable to recover it. 00:41:48.138 [2024-10-01 22:40:43.300338] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:48.138 [2024-10-01 22:40:43.300417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:48.138 [2024-10-01 22:40:43.300464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:48.138 [2024-10-01 22:40:43.300483] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:48.138 [2024-10-01 22:40:43.300517] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa7f8000b90 00:41:48.138 [2024-10-01 22:40:43.300558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:41:48.138 qpair failed and we were unable to recover it. 00:41:48.138 [2024-10-01 22:40:43.300715] nvme_ctrlr.c:4505:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:41:48.138 A controller has encountered a failure and is being reset. 00:41:48.138 [2024-10-01 22:40:43.300840] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1300ed0 (9): Bad file descriptor 00:41:48.138 Controller properly reset. 00:41:48.138 Initializing NVMe Controllers 00:41:48.138 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:48.138 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:48.138 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:41:48.138 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:41:48.138 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:41:48.138 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:41:48.138 Initialization complete. Launching workers. 
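The failure loop above is the host side of the disconnect test: every fabrics CONNECT to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 is rejected with "Unknown controller ID 0x1", the Keep Alive finally fails, and the controller is reset and re-attached on all four lcores. A minimal sketch of replaying that CONNECT by hand, assuming the kernel initiator and nvme-cli rather than the SPDK userspace host stack the test actually drives:

    # Replay the CONNECT the host is retrying above (nvme-cli is an assumption;
    # the test itself uses the SPDK userspace host, not the kernel initiator).
    modprobe nvme-tcp
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme list-subsys                                  # confirm the attach
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1     # clean up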
00:41:48.138 Starting thread on core 1 00:41:48.138 Starting thread on core 2 00:41:48.138 Starting thread on core 3 00:41:48.138 Starting thread on core 0 00:41:48.138 22:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:41:48.138 00:41:48.138 real 0m11.306s 00:41:48.138 user 0m21.585s 00:41:48.138 sys 0m3.848s 00:41:48.138 22:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:48.138 22:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:41:48.138 ************************************ 00:41:48.138 END TEST nvmf_target_disconnect_tc2 00:41:48.138 ************************************ 00:41:48.398 22:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:41:48.398 22:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:41:48.398 22:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:41:48.399 22:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:41:48.399 22:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:41:48.399 22:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:48.399 22:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:41:48.399 22:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:48.399 22:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:48.399 rmmod nvme_tcp 00:41:48.399 rmmod nvme_fabrics 00:41:48.399 rmmod nvme_keyring 00:41:48.399 22:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:48.399 22:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:41:48.399 22:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:41:48.399 22:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@515 -- # '[' -n 400200 ']' 00:41:48.399 22:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # killprocess 400200 00:41:48.399 22:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 400200 ']' 00:41:48.399 22:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 400200 00:41:48.399 22:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:41:48.399 22:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:48.399 22:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 400200 00:41:48.399 22:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:41:48.399 22:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:41:48.399 22:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 400200' 00:41:48.399 killing process with pid 400200 00:41:48.399 22:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@969 -- # kill 400200 00:41:48.399 22:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 400200 00:41:48.659 22:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:41:48.659 22:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:41:48.659 22:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:41:48.660 22:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:41:48.660 22:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:41:48.660 22:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:41:48.660 22:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:41:48.660 22:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:48.660 22:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:48.660 22:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:48.660 22:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:48.660 22:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:51.202 22:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:51.202 00:41:51.202 real 0m21.670s 00:41:51.202 user 0m49.092s 00:41:51.202 sys 0m10.034s 00:41:51.202 22:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:51.202 22:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:41:51.202 ************************************ 00:41:51.202 END TEST nvmf_target_disconnect 00:41:51.202 ************************************ 00:41:51.202 22:40:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:41:51.202 00:41:51.202 real 6m35.030s 00:41:51.202 user 11m31.680s 00:41:51.202 sys 2m13.574s 00:41:51.202 22:40:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:51.202 22:40:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:41:51.202 ************************************ 00:41:51.202 END TEST nvmf_host 00:41:51.202 ************************************ 00:41:51.202 22:40:45 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:41:51.202 22:40:45 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:41:51.202 22:40:45 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:41:51.202 22:40:45 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:41:51.202 22:40:45 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:51.202 22:40:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:51.202 ************************************ 00:41:51.202 START TEST nvmf_target_core_interrupt_mode 00:41:51.202 ************************************ 00:41:51.202 22:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:41:51.202 * Looking for test storage... 00:41:51.202 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:41:51.202 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:41:51.202 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:41:51.202 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lcov --version 00:41:51.202 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:41:51.202 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:51.202 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:51.202 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:51.202 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:41:51.202 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:41:51.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:51.203 --rc genhtml_branch_coverage=1 00:41:51.203 --rc genhtml_function_coverage=1 00:41:51.203 --rc genhtml_legend=1 00:41:51.203 --rc geninfo_all_blocks=1 00:41:51.203 --rc geninfo_unexecuted_blocks=1 00:41:51.203 00:41:51.203 ' 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:41:51.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:51.203 --rc genhtml_branch_coverage=1 00:41:51.203 --rc genhtml_function_coverage=1 00:41:51.203 --rc genhtml_legend=1 00:41:51.203 --rc geninfo_all_blocks=1 00:41:51.203 --rc geninfo_unexecuted_blocks=1 00:41:51.203 00:41:51.203 ' 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:41:51.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:51.203 --rc genhtml_branch_coverage=1 00:41:51.203 --rc genhtml_function_coverage=1 00:41:51.203 --rc genhtml_legend=1 00:41:51.203 --rc geninfo_all_blocks=1 00:41:51.203 --rc geninfo_unexecuted_blocks=1 00:41:51.203 00:41:51.203 ' 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:41:51.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:51.203 --rc genhtml_branch_coverage=1 00:41:51.203 --rc genhtml_function_coverage=1 00:41:51.203 --rc genhtml_legend=1 00:41:51.203 --rc geninfo_all_blocks=1 00:41:51.203 --rc geninfo_unexecuted_blocks=1 00:41:51.203 00:41:51.203 ' 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:51.203 ************************************ 00:41:51.203 START TEST nvmf_abort 00:41:51.203 ************************************ 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:41:51.203 * Looking for test storage... 00:41:51.203 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:41:51.203 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:41:51.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:51.204 --rc genhtml_branch_coverage=1 00:41:51.204 --rc genhtml_function_coverage=1 00:41:51.204 --rc genhtml_legend=1 00:41:51.204 --rc geninfo_all_blocks=1 00:41:51.204 --rc geninfo_unexecuted_blocks=1 00:41:51.204 00:41:51.204 ' 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:41:51.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:51.204 --rc genhtml_branch_coverage=1 00:41:51.204 --rc genhtml_function_coverage=1 00:41:51.204 --rc genhtml_legend=1 00:41:51.204 --rc geninfo_all_blocks=1 00:41:51.204 --rc geninfo_unexecuted_blocks=1 00:41:51.204 00:41:51.204 ' 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:41:51.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:51.204 --rc genhtml_branch_coverage=1 00:41:51.204 --rc genhtml_function_coverage=1 00:41:51.204 --rc genhtml_legend=1 00:41:51.204 --rc geninfo_all_blocks=1 00:41:51.204 --rc geninfo_unexecuted_blocks=1 00:41:51.204 00:41:51.204 ' 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:41:51.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:51.204 --rc genhtml_branch_coverage=1 00:41:51.204 --rc genhtml_function_coverage=1 00:41:51.204 --rc genhtml_legend=1 00:41:51.204 --rc geninfo_all_blocks=1 00:41:51.204 --rc geninfo_unexecuted_blocks=1 00:41:51.204 00:41:51.204 ' 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:51.204 22:40:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:41:51.204 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:41:51.205 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:51.205 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:41:51.205 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:41:51.205 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:41:51.205 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:51.205 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:51.205 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:51.205 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:41:51.205 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:41:51.205 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:41:51.205 22:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:59.339 22:40:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:41:59.339 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:41:59.339 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:41:59.339 Found net devices under 0000:4b:00.0: cvl_0_0 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:41:59.339 Found net devices under 0000:4b:00.1: cvl_0_1 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:59.339 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:59.340 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:59.340 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:59.340 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:59.340 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:59.340 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.516 ms 00:41:59.340 00:41:59.340 --- 10.0.0.2 ping statistics --- 00:41:59.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:59.340 rtt min/avg/max/mdev = 0.516/0.516/0.516/0.000 ms 00:41:59.340 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:59.340 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:59.340 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.376 ms 00:41:59.340 00:41:59.340 --- 10.0.0.1 ping statistics --- 00:41:59.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:59.340 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:41:59.340 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:59.340 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:41:59.340 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:41:59.340 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:59.340 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:41:59.340 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:41:59.340 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:59.340 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:41:59.340 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:41:59.340 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:41:59.340 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:41:59.340 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:59.340 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:41:59.340 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # nvmfpid=405627 
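The nvmf_tcp_init trace above pins down the test topology: of the two ice ports found earlier, cvl_0_0 (10.0.0.2, target) is moved into the cvl_0_0_ns_spdk namespace while cvl_0_1 (10.0.0.1, initiator) stays in the root namespace, TCP/4420 is opened with an iptables rule, and reachability is ping-verified in both directions. Condensed into a hand-runnable sketch (interface names are taken from this run and will differ on other nodes):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator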
00:41:59.340 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 405627 00:41:59.340 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:41:59.340 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 405627 ']' 00:41:59.340 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:59.340 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:59.340 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:59.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:59.340 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:59.340 22:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:41:59.340 [2024-10-01 22:40:53.884228] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:59.340 [2024-10-01 22:40:53.885384] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:41:59.340 [2024-10-01 22:40:53.885439] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:59.340 [2024-10-01 22:40:53.975781] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:41:59.340 [2024-10-01 22:40:54.070582] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:59.340 [2024-10-01 22:40:54.070648] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:59.340 [2024-10-01 22:40:54.070657] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:59.340 [2024-10-01 22:40:54.070664] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:59.340 [2024-10-01 22:40:54.070676] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:59.340 [2024-10-01 22:40:54.070834] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:41:59.340 [2024-10-01 22:40:54.070977] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:41:59.340 [2024-10-01 22:40:54.070977] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:41:59.340 [2024-10-01 22:40:54.213976] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:59.340 [2024-10-01 22:40:54.214039] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:59.340 [2024-10-01 22:40:54.214738] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:41:59.340 [2024-10-01 22:40:54.214989] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:59.599 22:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:59.599 22:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:41:59.599 22:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:41:59.599 22:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:59.599 22:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:41:59.599 22:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:59.599 22:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:41:59.599 22:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:59.599 22:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:41:59.599 [2024-10-01 22:40:54.735883] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:59.599 22:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:59.599 22:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:41:59.599 22:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:59.599 22:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:41:59.599 Malloc0 00:41:59.599 22:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:59.599 22:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:41:59.599 22:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:59.599 22:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:41:59.599 Delay0 00:41:59.599 22:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:59.599 22:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:41:59.599 22:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:59.599 22:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:41:59.599 22:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:59.599 22:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:41:59.599 22:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 
00:41:59.599 22:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:41:59.599 22:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:59.599 22:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:59.599 22:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:59.599 22:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:41:59.599 [2024-10-01 22:40:54.815699] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:59.599 22:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:59.599 22:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:59.599 22:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:59.599 22:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:41:59.599 22:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:59.599 22:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:41:59.857 [2024-10-01 22:40:54.968802] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:42:01.765 Initializing NVMe Controllers 00:42:01.765 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:42:01.765 controller IO queue size 128 less than required 00:42:01.765 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:42:01.765 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:42:01.765 Initialization complete. Launching workers. 
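Before the run's statistics, it is worth noting why this setup can abort anything at all: Delay0 wraps Malloc0 with the delay bdev's four latency knobs all set to 1000000 microseconds, roughly one second of average and p99 latency per read and write, while the abort example drives the namespace at queue depth 128 (-q 128) for one second (-t 1). Nearly every submitted read is therefore still in flight when the tool issues an abort for it, and the NS/CTRLR counters that follow tally how many of those aborts were accepted. A hedged recap of the target built by the rpc_cmd calls above (the rpc.py invocation path is an assumption; the test issued the same RPCs through rpc_cmd against /var/tmp/spdk.sock):

    RPC='scripts/rpc.py'                                # assumed invocation path
    $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256 # transport flags exactly as traced
    $RPC bdev_malloc_create 64 4096 -b Malloc0          # 64 MiB RAM disk, 4 KiB blocks
    $RPC bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000     # ~1 s avg/p99 latency, in microseconds
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128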
00:42:01.765 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29182 00:42:01.765 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29239, failed to submit 66 00:42:01.765 success 29182, unsuccessful 57, failed 0 00:42:01.765 22:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:01.765 22:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:01.765 22:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:42:02.024 22:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:02.024 22:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:42:02.024 22:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:42:02.024 22:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:42:02.024 22:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:42:02.024 22:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:02.024 22:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:42:02.024 22:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:02.024 22:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:02.024 rmmod nvme_tcp 00:42:02.024 rmmod nvme_fabrics 00:42:02.024 rmmod nvme_keyring 00:42:02.024 22:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:02.024 22:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:42:02.024 22:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:42:02.024 22:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 405627 ']' 00:42:02.024 22:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 405627 00:42:02.024 22:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 405627 ']' 00:42:02.024 22:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 405627 00:42:02.024 22:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:42:02.024 22:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:02.024 22:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 405627 00:42:02.024 22:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:42:02.025 22:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:42:02.025 22:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 405627' 00:42:02.025 killing process with pid 405627 00:42:02.025 
22:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # kill 405627 00:42:02.025 22:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@974 -- # wait 405627 00:42:02.284 22:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:42:02.284 22:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:42:02.284 22:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:42:02.284 22:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:42:02.284 22:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:42:02.284 22:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:42:02.284 22:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:42:02.284 22:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:02.284 22:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:02.284 22:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:02.285 22:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:02.285 22:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:04.825 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:04.825 00:42:04.825 real 0m13.249s 00:42:04.825 user 0m10.856s 00:42:04.825 sys 0m6.947s 00:42:04.825 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:04.825 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:42:04.825 ************************************ 00:42:04.825 END TEST nvmf_abort 00:42:04.825 ************************************ 00:42:04.825 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:42:04.825 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:42:04.825 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:04.825 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:04.825 ************************************ 00:42:04.825 START TEST nvmf_ns_hotplug_stress 00:42:04.825 ************************************ 00:42:04.825 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:42:04.825 * Looking for test storage... 
00:42:04.825 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:04.825 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:42:04.825 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:42:04.825 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:42:04.825 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:42:04.825 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:04.825 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:04.825 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:04.825 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:42:04.825 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:42:04.825 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:42:04.825 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:42:04.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:04.826 --rc genhtml_branch_coverage=1 00:42:04.826 --rc genhtml_function_coverage=1 00:42:04.826 --rc genhtml_legend=1 00:42:04.826 --rc geninfo_all_blocks=1 00:42:04.826 --rc geninfo_unexecuted_blocks=1 00:42:04.826 00:42:04.826 ' 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:42:04.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:04.826 --rc genhtml_branch_coverage=1 00:42:04.826 --rc genhtml_function_coverage=1 00:42:04.826 --rc genhtml_legend=1 00:42:04.826 --rc geninfo_all_blocks=1 00:42:04.826 --rc geninfo_unexecuted_blocks=1 00:42:04.826 00:42:04.826 ' 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:42:04.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:04.826 --rc genhtml_branch_coverage=1 00:42:04.826 --rc genhtml_function_coverage=1 00:42:04.826 --rc genhtml_legend=1 00:42:04.826 --rc geninfo_all_blocks=1 00:42:04.826 --rc geninfo_unexecuted_blocks=1 00:42:04.826 00:42:04.826 ' 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:42:04.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:04.826 --rc genhtml_branch_coverage=1 00:42:04.826 --rc genhtml_function_coverage=1 
00:42:04.826 --rc genhtml_legend=1 00:42:04.826 --rc geninfo_all_blocks=1 00:42:04.826 --rc geninfo_unexecuted_blocks=1 00:42:04.826 00:42:04.826 ' 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:42:04.826 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:04.827 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:42:04.827 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:42:04.827 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:42:04.827 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:04.827 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:04.827 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:04.827 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:42:04.827 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:42:04.827 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:42:04.827 22:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:12.965 22:41:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:12.965 22:41:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:42:12.965 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:42:12.965 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:42:12.965 
22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:42:12.965 Found net devices under 0000:4b:00.0: cvl_0_0 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:42:12.965 Found net devices under 0000:4b:00.1: cvl_0_1 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:12.965 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:12.966 22:41:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:12.966 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:12.966 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:12.966 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:12.966 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:12.966 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:12.966 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:12.966 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:12.966 22:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:12.966 22:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:12.966 22:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:12.966 22:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:12.966 22:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:12.966 22:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:12.966 22:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:12.966 22:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:12.966 22:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:12.966 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:12.966 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:42:12.966 00:42:12.966 --- 10.0.0.2 ping statistics --- 00:42:12.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:12.966 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:42:12.966 22:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:12.966 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:12.966 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:42:12.966 00:42:12.966 --- 10.0.0.1 ping statistics --- 00:42:12.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:12.966 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:42:12.966 22:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:12.966 22:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:42:12.966 22:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:42:12.966 22:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:12.966 22:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:42:12.966 22:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:42:12.966 22:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:12.966 22:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:42:12.966 22:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:42:12.966 22:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:42:12.966 22:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:42:12.966 22:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:12.966 22:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:42:12.966 22:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=410636 00:42:12.966 22:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 410636 00:42:12.966 22:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:42:12.966 22:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 410636 ']' 00:42:12.966 22:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:12.966 22:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:12.966 22:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:12.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
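The "Waiting for process..." line above is printed by waitforlisten, which roughly keeps checking that the new target pid is alive and that its RPC socket answers before letting the test proceed (the trace shows max_retries=100). A simplified sketch of that pattern, under the assumption that probing rpc_get_methods over /var/tmp/spdk.sock is an acceptable liveness check:

    pid=$1 sock=/var/tmp/spdk.sock
    for ((i = 100; i > 0; i--)); do                      # mirrors max_retries=100
        kill -0 "$pid" 2>/dev/null || { echo "app $pid exited early" >&2; exit 1; }
        scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done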
00:42:12.966 22:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:12.966 22:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:42:12.966 [2024-10-01 22:41:07.391512] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:12.966 [2024-10-01 22:41:07.392643] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:42:12.966 [2024-10-01 22:41:07.392697] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:12.966 [2024-10-01 22:41:07.482986] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:42:12.966 [2024-10-01 22:41:07.577663] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:12.966 [2024-10-01 22:41:07.577719] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:12.966 [2024-10-01 22:41:07.577728] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:12.966 [2024-10-01 22:41:07.577735] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:12.966 [2024-10-01 22:41:07.577741] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:12.966 [2024-10-01 22:41:07.577896] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:42:12.966 [2024-10-01 22:41:07.578165] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:42:12.966 [2024-10-01 22:41:07.578167] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:42:12.966 [2024-10-01 22:41:07.715780] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:42:12.966 [2024-10-01 22:41:07.715857] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:12.966 [2024-10-01 22:41:07.716467] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:42:12.966 [2024-10-01 22:41:07.716775] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
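The reactor notices above line up with the -m 0xE core mask handed to nvmf_tgt: 0xE is binary 1110, so reactors land on cores 1, 2 and 3 and core 0 stays free for the rest of the test. A throwaway decoder for such masks:

    mask=0xE                                   # the mask passed via -m above
    for core in {0..7}; do
        (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
    done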
00:42:12.966 22:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:12.966 22:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:42:12.966 22:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:42:12.966 22:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:12.966 22:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:42:13.227 22:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:13.227 22:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:42:13.227 22:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:42:13.227 [2024-10-01 22:41:08.407995] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:13.227 22:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:42:13.488 22:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:13.748 [2024-10-01 22:41:08.767945] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:13.748 22:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:42:13.748 22:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:42:14.012 Malloc0 00:42:14.012 22:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:42:14.273 Delay0 00:42:14.273 22:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:14.534 22:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:42:14.534 NULL1 00:42:14.798 22:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
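With Delay0 (namespace 1) and NULL1 attached to cnode1, the stress phase that follows launches spdk_nvme_perf for 30 seconds and keeps mutating the subsystem underneath it for as long as kill -0 reports the perf pid alive. The bursts of "Read completed with error (sct=0, sc=11)" are the initiator seeing generic status 11, i.e. 0x0b, consistent with NVMe "invalid namespace or format" on a namespace that was just yanked, and the -Q 1000 perf flag is why each one arrives as "Message suppressed 999 times". Condensed, the loop traced below is roughly:

    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do   # stop once the 30 s perf run ends
        scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        scripts/rpc.py bdev_null_resize NULL1 "$null_size"   # grow the null bdev one step
    done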
00:42:14.798 22:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=411006 00:42:14.798 22:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 411006 00:42:14.798 22:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:42:14.798 22:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:16.183 Read completed with error (sct=0, sc=11) 00:42:16.183 22:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:16.183 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:16.183 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:16.183 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:16.183 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:16.183 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:16.183 22:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:42:16.183 22:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:42:16.443 true 00:42:16.443 22:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 411006 00:42:16.443 22:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:17.385 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:17.385 22:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:17.385 22:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:42:17.385 22:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:42:17.647 true 00:42:17.647 22:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 411006 00:42:17.647 22:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:17.647 22:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:17.907 22:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:42:17.907 22:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:42:18.167 true 00:42:18.167 22:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 411006 00:42:18.167 22:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:19.550 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:19.551 22:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:19.551 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:19.551 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:19.551 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:19.551 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:19.551 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:19.551 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:19.551 22:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:42:19.551 22:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:42:19.551 true 00:42:19.551 22:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 411006 00:42:19.551 22:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:20.492 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:20.492 22:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:20.492 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:20.753 22:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:42:20.753 22:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:42:20.753 true 00:42:20.753 22:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 411006 00:42:20.753 22:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:21.014 22:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:21.274 22:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:42:21.275 22:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:42:21.275 true 00:42:21.275 22:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 411006 00:42:21.275 22:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:21.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:21.535 22:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:21.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:21.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:21.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:21.795 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:21.795 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:21.795 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:21.795 22:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:42:21.795 22:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:42:21.795 true 00:42:22.057 22:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 411006 00:42:22.057 22:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:23.005 22:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:23.005 22:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:42:23.005 22:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:42:23.005 true 00:42:23.265 22:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 411006 00:42:23.265 22:41:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:23.265 22:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:23.525 22:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:42:23.525 22:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:42:23.785 true 00:42:23.785 22:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 411006 00:42:23.785 22:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:24.724 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:24.724 22:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:24.725 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:24.725 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:24.984 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:24.984 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:24.984 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:24.984 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:24.984 22:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:42:24.984 22:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:42:25.244 true 00:42:25.244 22:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 411006 00:42:25.244 22:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:26.246 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:26.246 22:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:26.246 22:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:42:26.246 22:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:42:26.246 true 00:42:26.246 22:41:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 411006 00:42:26.246 22:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:26.536 22:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:26.796 22:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:42:26.796 22:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:42:26.796 true 00:42:26.796 22:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 411006 00:42:26.796 22:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:28.181 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:28.181 22:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:28.181 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:28.181 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:28.181 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:28.181 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:28.181 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:28.181 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:28.181 22:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:42:28.181 22:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:42:28.441 true 00:42:28.441 22:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 411006 00:42:28.441 22:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:29.382 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:29.382 22:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:29.382 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:29.382 22:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:42:29.382 22:41:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:42:29.642 true 00:42:29.642 22:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 411006 00:42:29.642 22:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:29.902 22:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:29.902 22:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:42:29.902 22:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:42:30.163 true 00:42:30.163 22:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 411006 00:42:30.163 22:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:31.548 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:31.548 22:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:31.548 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:31.548 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:31.548 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:31.548 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:31.548 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:31.548 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:31.548 22:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:42:31.548 22:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:42:31.809 true 00:42:31.809 22:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 411006 00:42:31.809 22:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:32.752 22:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:32.752 22:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:42:32.752 22:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:42:33.013 true 00:42:33.013 22:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 411006 00:42:33.013 22:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:33.013 22:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:33.272 22:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:42:33.272 22:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:42:33.531 true 00:42:33.531 22:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 411006 00:42:33.531 22:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:34.911 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:34.911 22:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:34.911 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:34.911 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:34.911 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:34.911 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:34.911 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:34.911 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:34.911 22:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:42:34.911 22:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:42:34.911 true 00:42:34.911 22:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 411006 00:42:34.911 22:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:35.852 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:35.852 22:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:36.113 22:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:42:36.113 22:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:42:36.113 true 00:42:36.113 22:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 411006 00:42:36.113 22:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:36.374 22:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:36.634 22:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:42:36.634 22:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:42:36.634 true 00:42:36.893 22:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 411006 00:42:36.893 22:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:37.831 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:37.831 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:37.831 22:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:37.831 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:38.092 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:38.092 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:38.092 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:38.092 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:38.092 22:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:42:38.092 22:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:42:38.353 true 00:42:38.353 22:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 411006 00:42:38.353 22:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:39.294 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:39.294 22:41:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:39.294 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:39.294 22:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:42:39.294 22:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:42:39.555 true 00:42:39.555 22:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 411006 00:42:39.556 22:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:39.556 22:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:39.816 22:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:42:39.816 22:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:42:40.076 true 00:42:40.076 22:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 411006 00:42:40.076 22:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:40.076 22:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:40.336 22:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:42:40.336 22:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:42:40.595 true 00:42:40.595 22:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 411006 00:42:40.595 22:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:40.855 22:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:40.855 22:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:42:40.855 22:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:42:41.117 true 00:42:41.117 22:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 411006 00:42:41.117 22:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:42.502 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:42.502 22:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:42.502 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:42.502 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:42.502 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:42.502 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:42.502 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:42.502 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:42.502 22:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:42:42.502 22:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:42:42.502 true 00:42:42.502 22:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 411006 00:42:42.502 22:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:43.443 22:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:43.704 22:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:42:43.704 22:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:42:43.704 true 00:42:43.704 22:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 411006 00:42:43.704 22:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:43.974 22:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:44.236 22:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:42:44.236 22:41:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:42:44.236 true
00:42:44.496 22:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 411006
00:42:44.496 22:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:42:45.437 Initializing NVMe Controllers
00:42:45.437 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:42:45.437 Controller IO queue size 128, less than required.
00:42:45.437 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:42:45.437 Controller IO queue size 128, less than required.
00:42:45.437 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:42:45.437 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:42:45.437 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:42:45.437 Initialization complete. Launching workers.
00:42:45.437 ========================================================
00:42:45.437                                                                    Latency(us)
00:42:45.437 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:42:45.437 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2414.27       1.18   36557.68    1555.81 1071980.51
00:42:45.437 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   19856.93       9.70    6445.92    1629.08  403173.47
00:42:45.437 ========================================================
00:42:45.437 Total                                                                    :   22271.20      10.87    9710.13    1555.81 1071980.51
00:42:45.437
00:42:45.437 22:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:42:45.698 22:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:42:45.698 22:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:42:45.958 true
00:42:45.959 22:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 411006
00:42:45.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (411006) - No such process
00:42:45.959 22:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 411006
00:42:45.959 22:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:42:45.959 22:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:42:46.219 22:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:42:46.219 22:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:42:46.219 22:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:42:46.219 22:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:42:46.219 22:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:42:46.480 null0 00:42:46.480 22:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:42:46.480 22:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:42:46.480 22:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:42:46.480 null1 00:42:46.480 22:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:42:46.480 22:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:42:46.480 22:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:42:46.741 null2 00:42:46.741 22:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:42:46.741 22:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:42:46.741 22:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:42:47.002 null3 00:42:47.002 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:42:47.002 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:42:47.002 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:42:47.002 null4 00:42:47.002 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:42:47.002 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:42:47.002 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:42:47.263 null5 00:42:47.263 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:42:47.263 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:42:47.263 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:42:47.263 null6 00:42:47.263 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:42:47.263 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:42:47.263 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:42:47.525 null7 00:42:47.525 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:42:47.525 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:42:47.525 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:42:47.525 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:42:47.525 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:42:47.525 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:42:47.525 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:42:47.525 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:42:47.525 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:42:47.525 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:42:47.525 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:47.525 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
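[Note] The spdk_nvme_perf summary a few entries above is worth decoding. NSID 1 is Delay0, which injects roughly one second of latency per I/O and is also the namespace that was being hot-removed and re-added for the whole run, hence the 36557.68 us average and the 1071980.51 us (about 1.07 s) worst case; NSID 2 is NULL1, which completes in memory and averages 6445.92 us under the queue-depth-128 load. The Total row is the IOPS-weighted mean of the two rows, which checks out:

    (2414.27 * 36557.68 + 19856.93 * 6445.92) / (2414.27 + 19856.93) ≈ 9710.1 us   (reported: 9710.13 us)

The "kill: (411006) - No such process" message is not a failure either: perf exited on its own once its -t 30 window elapsed, so the next kill -0 liveness probe failed, ending the resize loop; the script then removed namespaces 1 and 2 and moved on to the parallel add/remove phase being set up here.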
00:42:47.525 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:42:47.525 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
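[Note] From this point the xtrace becomes an interleaving of eight concurrent workers, so the raw log is hard to follow. Reconstructed from the script line numbers visible in the trace (ns_hotplug_stress.sh lines 14-18 and 58-66), the logic amounts to the sketch below; it is an approximation, not the literal source, and rpc.py again stands for the full scripts/rpc.py path:

    add_remove() {
        local nsid=$1 bdev=$2
        # ten attach/detach cycles against the same namespace ID
        for ((i = 0; i < 10; i++)); do
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        rpc.py bdev_null_create "null$i" 100 4096    # eight 100 MiB null bdevs, 4096 B blocks
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &             # churn NSIDs 1..8 concurrently
        pids+=($!)
    done
    wait "${pids[@]}"                                # the worker PIDs (417398 417400 ...) appear in the wait just below

Since the subsystem was created with -m 10, all eight namespaces fit simultaneously; the point of this phase is to hammer concurrent namespace attach and detach on a single subsystem rather than to push I/O.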
00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 417398 417400 417403 417405 417408 417411 417414 417416 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:47.526 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:42:47.788 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:47.788 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:42:47.788 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:42:47.788 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:42:47.788 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:42:47.788 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:42:47.788 22:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:42:47.788 22:41:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:42:47.788 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:47.788 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:47.788 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:42:48.048 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:48.049 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:48.049 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:48.049 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:48.049 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:42:48.049 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:42:48.049 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:48.049 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:48.049 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:42:48.049 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:48.049 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:48.049 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:48.049 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:48.049 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:42:48.049 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:42:48.049 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:48.049 22:41:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:48.049 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:42:48.049 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:48.049 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:48.049 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:42:48.049 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:48.049 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:42:48.049 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:42:48.309 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:42:48.309 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:42:48.309 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:42:48.309 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:42:48.309 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:42:48.309 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:48.309 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:48.309 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:42:48.309 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:48.309 22:41:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:48.309 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:42:48.309 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:48.309 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:48.309 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:42:48.309 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:48.309 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:48.309 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:42:48.309 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:48.309 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:48.309 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:42:48.309 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:48.309 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:48.309 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:42:48.309 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:48.309 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:48.309 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:42:48.309 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:48.569 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:48.569 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:48.569 22:41:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:42:48.569 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:42:48.569 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:42:48.569 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:42:48.569 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:42:48.569 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:42:48.569 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:42:48.569 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:48.569 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:48.569 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:42:48.569 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:42:48.569 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:48.569 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:48.569 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:42:48.830 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:48.830 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:48.830 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:42:48.830 22:41:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:48.830 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:48.830 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:42:48.830 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:48.830 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:48.830 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:48.830 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:42:48.830 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:48.830 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:48.830 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:42:48.830 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:48.830 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:48.830 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:42:48.830 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:42:48.830 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:48.830 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:48.830 22:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:42:48.830 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:42:48.830 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:42:48.830 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:42:48.830 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:48.830 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:48.830 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:42:48.830 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:42:48.830 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:42:49.092 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:42:49.092 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:49.092 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:49.092 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:42:49.092 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:49.092 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:49.092 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:42:49.092 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:49.092 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:49.092 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:42:49.092 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:49.092 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:49.092 22:41:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:49.092 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:42:49.092 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:49.092 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:49.092 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:42:49.092 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:49.092 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:49.092 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:42:49.092 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:49.092 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:49.092 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:42:49.092 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:42:49.353 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:49.353 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:49.353 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:42:49.353 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:42:49.353 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:42:49.353 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:42:49.353 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:42:49.353 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:42:49.353 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:42:49.353 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:49.353 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:49.354 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:42:49.354 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:49.354 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:49.354 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:49.354 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:42:49.354 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:49.354 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:49.354 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:42:49.615 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:49.615 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:49.615 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:42:49.615 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:49.615 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:49.615 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:42:49.615 22:41:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:49.615 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:49.615 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:42:49.615 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:42:49.615 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:49.615 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:49.615 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:42:49.615 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:49.615 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:49.615 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:42:49.615 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:42:49.615 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:42:49.615 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:42:49.615 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:42:49.876 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:42:49.876 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:49.876 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:42:49.876 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:49.876 22:41:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:42:49.876 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:49.876 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:49.876 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:49.876 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:42:49.876 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:49.876 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:49.876 22:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:42:49.876 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:49.876 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:49.876 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:42:49.876 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:49.876 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:49.876 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:42:49.876 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:49.876 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:49.876 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:49.876 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:49.876 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:42:49.876 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:42:49.876 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:49.876 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:49.876 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:42:49.876 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:42:49.876 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:42:49.876 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:42:50.136 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:42:50.136 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:42:50.136 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:42:50.136 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:50.137 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:42:50.137 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:50.137 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:50.137 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:42:50.137 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:50.137 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:50.137 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:42:50.137 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:50.137 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:50.137 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:42:50.137 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:50.137 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:50.137 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:42:50.397 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:50.397 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:50.397 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:42:50.397 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:50.397 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:50.397 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:42:50.397 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:50.397 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:50.397 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:42:50.397 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:50.397 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:50.397 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:42:50.397 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:42:50.397 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:42:50.397 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:42:50.397 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:42:50.397 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:50.397 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:42:50.397 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:42:50.657 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:50.657 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:50.657 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:42:50.657 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:42:50.657 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:50.657 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:50.657 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:42:50.657 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:50.657 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:50.657 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:42:50.657 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:50.658 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:50.658 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:42:50.658 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:50.658 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:50.658 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:42:50.658 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:50.658 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:50.658 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:42:50.658 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:50.658 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:50.658 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:42:50.658 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:42:50.658 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:50.658 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:50.658 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:42:50.658 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:42:50.918 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:42:50.918 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:42:50.918 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:50.918 22:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:42:50.918 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:42:50.918 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:50.918 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:50.918 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:42:50.918 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:50.918 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:50.918 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:42:50.918 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:42:50.918 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:50.918 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:50.918 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:42:50.918 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:50.918 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:50.918 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:42:50.918 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:50.918 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:50.918 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:42:51.178 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:51.178 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:51.178 22:41:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:51.178 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:51.178 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:42:51.178 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:42:51.178 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:42:51.178 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:51.178 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:51.178 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:42:51.178 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:42:51.178 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:42:51.178 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:42:51.178 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:42:51.178 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:51.178 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:51.438 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:51.438 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:51.438 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:51.438 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:51.438 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 
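The interleaved "@16"/"@17"/"@18" records above are the shell xtrace of the namespace hotplug stress loop in target/ns_hotplug_stress.sh: line 16 is the loop header (hence the paired "(( ++i ))" / "(( i < 10 ))" checks), line 17 attaches a namespace to nqn.2016-06.io.spdk:cnode1 via the nvmf_subsystem_add_ns RPC, and line 18 detaches it again with nvmf_subsystem_remove_ns. The script body itself is never printed in this log, so the sketch below is a reconstruction pieced together from the trace, not the actual source; the eight namespaces interleave in the log because each one appears to be cycled by its own background worker.

    #!/usr/bin/env bash
    # Reconstruction of the add/remove cycle traced above (assumed, not
    # copied from ns_hotplug_stress.sh). rpc.py, the cnode1 NQN, the
    # null0..null7 bdev names, and the 10-iteration bound all come from
    # the trace; the per-namespace backgrounding is inferred from the
    # interleaved timestamps and may differ in the real script.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    for nsid in {1..8}; do
        (
            for ((i = 0; i < 10; ++i)); do                          # sh@16
                "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "null$((nsid - 1))"  # sh@17
                "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"   # sh@18
            done
        ) &    # one worker per namespace ID
    done
    wait       # the stress phase ends once every worker reaches i == 10

Once the last worker finishes, the records that follow clear the EXIT trap and run nvmftestfini, whose cleanup path (nvmf/common.sh@121 onward) syncs, unloads the nvme-tcp, nvme-fabrics, and nvme-keyring modules (the bare "rmmod ..." lines are modprobe -v output), and then kills the target process with pid 410636.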
00:42:51.438 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:51.438 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:51.438 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:51.438 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:51.438 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:51.438 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:51.438 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:51.438 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:51.438 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:42:51.438 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:42:51.438 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:42:51.438 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:42:51.438 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:51.438 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:42:51.438 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:51.438 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:51.438 rmmod nvme_tcp 00:42:51.438 rmmod nvme_fabrics 00:42:51.438 rmmod nvme_keyring 00:42:51.438 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:51.699 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:42:51.699 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:42:51.699 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 410636 ']' 00:42:51.699 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 410636 00:42:51.699 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 410636 ']' 00:42:51.699 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 410636 00:42:51.699 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:42:51.699 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:51.699 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 410636 00:42:51.699 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:42:51.699 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:42:51.699 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 410636' 00:42:51.699 killing process with pid 410636 00:42:51.699 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 410636 00:42:51.699 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 410636 00:42:51.699 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:42:51.699 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:42:51.699 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:42:51.699 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:42:51.699 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:42:51.699 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:42:51.699 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:42:51.699 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:51.699 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:51.699 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:51.699 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:51.699 22:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:54.245 00:42:54.245 real 0m49.479s 00:42:54.245 user 2m57.436s 00:42:54.245 sys 0m21.100s 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:42:54.245 ************************************ 00:42:54.245 END TEST nvmf_ns_hotplug_stress 00:42:54.245 ************************************ 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:42:54.245 22:41:49 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:54.245 ************************************ 00:42:54.245 START TEST nvmf_delete_subsystem 00:42:54.245 ************************************ 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:42:54.245 * Looking for test storage... 00:42:54.245 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:42:54.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:54.245 --rc genhtml_branch_coverage=1 00:42:54.245 --rc genhtml_function_coverage=1 00:42:54.245 --rc genhtml_legend=1 00:42:54.245 --rc geninfo_all_blocks=1 00:42:54.245 --rc geninfo_unexecuted_blocks=1 00:42:54.245 00:42:54.245 ' 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:42:54.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:54.245 --rc genhtml_branch_coverage=1 00:42:54.245 --rc genhtml_function_coverage=1 00:42:54.245 --rc genhtml_legend=1 00:42:54.245 --rc geninfo_all_blocks=1 00:42:54.245 --rc geninfo_unexecuted_blocks=1 00:42:54.245 00:42:54.245 ' 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:42:54.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:54.245 --rc genhtml_branch_coverage=1 00:42:54.245 --rc genhtml_function_coverage=1 00:42:54.245 --rc genhtml_legend=1 00:42:54.245 --rc geninfo_all_blocks=1 00:42:54.245 --rc geninfo_unexecuted_blocks=1 00:42:54.245 00:42:54.245 ' 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:42:54.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:54.245 --rc genhtml_branch_coverage=1 00:42:54.245 --rc genhtml_function_coverage=1 00:42:54.245 --rc 
genhtml_legend=1 00:42:54.245 --rc geninfo_all_blocks=1 00:42:54.245 --rc geninfo_unexecuted_blocks=1 00:42:54.245 00:42:54.245 ' 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:54.245 22:41:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:54.245 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:54.246 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:54.246 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:42:54.246 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:54.246 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:42:54.246 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:54.246 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:54.246 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:54.246 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:54.246 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:54.246 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:54.246 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:54.246 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:54.246 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:54.246 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:54.246 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:42:54.246 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:42:54.246 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:54.246 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:42:54.246 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:42:54.246 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:42:54.246 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:54.246 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:54.246 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:54.246 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:42:54.246 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:42:54.246 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:42:54.246 22:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:02.424 22:41:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:02.424 22:41:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:43:02.424 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:43:02.424 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:43:02.424 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:02.425 22:41:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:43:02.425 Found net devices under 0000:4b:00.0: cvl_0_0 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:43:02.425 Found net devices under 0000:4b:00.1: cvl_0_1 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:02.425 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:02.425 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:43:02.425 00:43:02.425 --- 10.0.0.2 ping statistics --- 00:43:02.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:02.425 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:02.425 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:02.425 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:43:02.425 00:43:02.425 --- 10.0.0.1 ping statistics --- 00:43:02.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:02.425 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=422344 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 422344 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 422344 ']' 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:02.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
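A minimal sketch of what nvmfappstart/waitforlisten reduce to at this point, assuming the namespace, core mask, and socket path traced above (paths abbreviated; rpc_get_methods is a standard SPDK RPC used here only as a liveness probe, while the real waitforlisten helper is more thorough):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
  nvmfpid=$!
  # poll the RPC socket until the target answers, as waitforlisten does
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done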
00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:02.425 22:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:43:02.425 [2024-10-01 22:41:56.631669] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:02.425 [2024-10-01 22:41:56.632810] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:43:02.425 [2024-10-01 22:41:56.632863] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:02.425 [2024-10-01 22:41:56.704865] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:43:02.425 [2024-10-01 22:41:56.778802] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:02.426 [2024-10-01 22:41:56.778843] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:02.426 [2024-10-01 22:41:56.778852] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:02.426 [2024-10-01 22:41:56.778861] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:02.426 [2024-10-01 22:41:56.778866] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:02.426 [2024-10-01 22:41:56.782645] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:43:02.426 [2024-10-01 22:41:56.782805] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:43:02.426 [2024-10-01 22:41:56.890003] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:43:02.426 [2024-10-01 22:41:56.890101] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:43:02.426 [2024-10-01 22:41:56.890247] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
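Note that --interrupt-mode never appears in the test script's own arguments; build_nvmf_app_args assembles the target command line from the array appends traced earlier. A condensed sketch of that assembly ($interrupt_enabled is a stand-in for the flag common.sh actually tests in the '[' 1 -eq 1 ']' check above; paths abbreviated):

  NVMF_APP=(./build/bin/nvmf_tgt)                                 # base binary
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)                     # shm id + tracepoint mask
  [[ $interrupt_enabled -eq 1 ]] && NVMF_APP+=(--interrupt-mode)  # interrupt-mode suite only
  NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
  NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")          # prefix with the target netns
  "${NVMF_APP[@]}" -m 0x3 &                                       # nvmfappstart passes the core mask through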
00:43:02.426 22:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:02.426 22:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:43:02.426 22:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:43:02.426 22:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:02.426 22:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:43:02.426 22:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:02.426 22:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:02.426 22:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:02.426 22:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:43:02.426 [2024-10-01 22:41:57.471283] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:02.426 22:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:02.426 22:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:43:02.426 22:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:02.426 22:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:43:02.426 22:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:02.426 22:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:02.426 22:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:02.426 22:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:43:02.426 [2024-10-01 22:41:57.495994] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:02.426 22:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:02.426 22:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:43:02.426 22:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:02.426 22:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:43:02.426 NULL1 00:43:02.426 22:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:02.426 22:41:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:43:02.426 22:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:02.426 22:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:43:02.426 Delay0 00:43:02.426 22:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:02.426 22:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:43:02.426 22:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:02.426 22:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:43:02.426 22:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:02.426 22:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=422616 00:43:02.426 22:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:43:02.426 22:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:43:02.426 [2024-10-01 22:41:57.586323] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
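Everything the target needs for this test was configured over RPC in the calls just traced; stripped of the rpc_cmd wrapper and xtrace noise, the sequence is equivalent to (rpc.py standing in for rpc_cmd; paths abbreviated):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_null_create NULL1 1000 512               # 1000 MB null bdev, 512 B blocks
  ./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000                # ~1 s injected latency per I/O (values in us)
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  ./build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!

The delay bdev is the point of the exercise: with every I/O held for roughly a second, the queue is guaranteed to be full of in-flight commands when the subsystem is deleted after the sleep 2 above.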
00:43:04.338 22:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:04.338 22:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:04.338 22:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:43:04.599 Read completed with error (sct=0, sc=8) 00:43:04.599 Read completed with error (sct=0, sc=8) 00:43:04.599 Read completed with error (sct=0, sc=8) 00:43:04.599 starting I/O failed: -6 00:43:04.599 Read completed with error (sct=0, sc=8) 00:43:04.599 Write completed with error (sct=0, sc=8) 00:43:04.599 Read completed with error (sct=0, sc=8) 00:43:04.599 Write completed with error (sct=0, sc=8) 00:43:04.599 starting I/O failed: -6 00:43:04.599 Read completed with error (sct=0, sc=8) 00:43:04.599 Read completed with error (sct=0, sc=8) 00:43:04.599 Read completed with error (sct=0, sc=8) 00:43:04.599 Read completed with error (sct=0, sc=8) 00:43:04.599 starting I/O failed: -6 00:43:04.599 Write completed with error (sct=0, sc=8) 00:43:04.599 Write completed with error (sct=0, sc=8) 00:43:04.599 Read completed with error (sct=0, sc=8) 00:43:04.599 Read completed with error (sct=0, sc=8) 00:43:04.599 starting I/O failed: -6 00:43:04.599 Read completed with error (sct=0, sc=8) 00:43:04.599 Read completed with error (sct=0, sc=8) 00:43:04.599 Read completed with error (sct=0, sc=8) 00:43:04.599 Write completed with error (sct=0, sc=8) 00:43:04.599 starting I/O failed: -6 00:43:04.599 Read completed with error (sct=0, sc=8) 00:43:04.599 Write completed with error (sct=0, sc=8) 00:43:04.599 Read completed with error (sct=0, sc=8) 00:43:04.599 Read completed with error (sct=0, sc=8) 00:43:04.599 starting I/O failed: -6 00:43:04.599 Read completed with error (sct=0, sc=8) 00:43:04.599 Read completed with error (sct=0, sc=8) 00:43:04.599 Read completed with error (sct=0, sc=8) 00:43:04.599 Read completed with error (sct=0, sc=8) 00:43:04.599 starting I/O failed: -6 00:43:04.599 Write completed with error (sct=0, sc=8) 00:43:04.599 Read completed with error (sct=0, sc=8) 00:43:04.599 Read completed with error (sct=0, sc=8) 00:43:04.599 Read completed with error (sct=0, sc=8) 00:43:04.599 starting I/O failed: -6 00:43:04.599 Read completed with error (sct=0, sc=8) 00:43:04.599 Read completed with error (sct=0, sc=8) 00:43:04.599 Read completed with error (sct=0, sc=8) 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 starting I/O failed: -6 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 starting I/O failed: -6 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 starting I/O failed: -6 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 [2024-10-01 22:41:59.707412] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810390 is same with the state(6) to be set 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 Read completed 
with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 starting I/O failed: -6 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 starting I/O failed: -6 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Write completed with error 
(sct=0, sc=8) 00:43:04.600 starting I/O failed: -6 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 starting I/O failed: -6 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 starting I/O failed: -6 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 starting I/O failed: -6 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 starting I/O failed: -6 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 starting I/O failed: -6 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 starting I/O failed: -6 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 starting I/O failed: -6 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 [2024-10-01 22:41:59.709837] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0320000c00 is same with the state(6) to be set 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Write completed 
with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Read completed with error (sct=0, sc=8) 00:43:04.600 Write completed with error (sct=0, sc=8) 00:43:05.541 [2024-10-01 22:42:00.685618] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1811a70 is same with the state(6) to be set 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Write completed with error (sct=0, sc=8) 00:43:05.541 Write completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Write completed with error (sct=0, sc=8) 00:43:05.541 Write completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Write completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Write completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Write completed with error (sct=0, sc=8) 00:43:05.541 Write completed with error (sct=0, sc=8) 00:43:05.541 [2024-10-01 22:42:00.710894] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810570 is same with the state(6) to be set 00:43:05.541 Write completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Write completed with error (sct=0, sc=8) 00:43:05.541 Write completed with error (sct=0, sc=8) 00:43:05.541 Write completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Write completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error 
(sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Write completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Write completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Write completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 [2024-10-01 22:42:00.712186] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810930 is same with the state(6) to be set 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Write completed with error (sct=0, sc=8) 00:43:05.541 Write completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Write completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Write completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 [2024-10-01 22:42:00.712460] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f032000cfe0 is same with the state(6) to be set 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Write completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Write completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Write completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 Read completed with error (sct=0, sc=8) 00:43:05.541 [2024-10-01 22:42:00.712844] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f032000d640 is same with the state(6) to be set 00:43:05.541 Initializing NVMe Controllers 00:43:05.541 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:43:05.541 Controller IO queue size 128, less than required. 00:43:05.541 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:43:05.541 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:43:05.541 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:43:05.541 Initialization complete. Launching workers. 00:43:05.541 ======================================================== 00:43:05.541 Latency(us) 00:43:05.541 Device Information : IOPS MiB/s Average min max 00:43:05.541 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 171.68 0.08 890775.10 259.09 1009049.60 00:43:05.541 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 157.74 0.08 922403.52 286.36 1009605.62 00:43:05.541 ======================================================== 00:43:05.541 Total : 329.42 0.16 905920.43 259.09 1009605.62 00:43:05.541 00:43:05.541 [2024-10-01 22:42:00.713278] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1811a70 (9): Bad file descriptor 00:43:05.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:43:05.541 22:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:05.541 22:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:43:05.541 22:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 422616 00:43:05.541 22:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:43:06.113 22:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:43:06.113 22:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 422616 00:43:06.113 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (422616) - No such process 00:43:06.113 22:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 422616 00:43:06.113 22:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:43:06.113 22:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 422616 00:43:06.113 22:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:43:06.113 22:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:43:06.113 22:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:43:06.113 22:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:43:06.113 22:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 422616 00:43:06.113 22:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:43:06.113 22:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:43:06.113 22:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 
-- # [[ -n '' ]] 00:43:06.113 22:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:43:06.113 22:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:43:06.113 22:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:06.113 22:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:43:06.113 22:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:06.113 22:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:06.113 22:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:06.113 22:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:43:06.113 [2024-10-01 22:42:01.247697] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:06.113 22:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:06.113 22:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:43:06.113 22:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:06.113 22:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:43:06.113 22:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:06.113 22:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=423367 00:43:06.113 22:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:43:06.113 22:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:43:06.113 22:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 423367 00:43:06.113 22:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:43:06.113 [2024-10-01 22:42:01.313056] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
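The trace above stands up the target side of the test again after the first pass: create the subsystem, add a TCP listener, attach the namespace, then relaunch spdk_nvme_perf (a 3-second randrw run at queue depth 128) against it. Condensed into plain shell, the RPC sequence is roughly the following minimal sketch, using SPDK's stock scripts/rpc.py client; the rpc.py path and the earlier creation of the Delay0 bdev are assumptions, as this excerpt only shows the rpc_cmd wrapper calls:

    rpc=./scripts/rpc.py
    # Create the subsystem: -a allows any host, -s sets the serial number,
    # -m caps the namespace count at 10.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    # Expose it over NVMe/TCP on 10.0.0.2:4420.
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Attach the delay bdev (created earlier in the run, not shown here) as namespace 1.
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0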
00:43:06.682 22:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:43:06.682 22:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 423367 00:43:06.682 22:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:43:07.252 22:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:43:07.252 22:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 423367 00:43:07.252 22:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:43:07.823 22:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:43:07.823 22:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 423367 00:43:07.823 22:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:43:08.084 22:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:43:08.084 22:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 423367 00:43:08.084 22:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:43:08.654 22:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:43:08.654 22:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 423367 00:43:08.654 22:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:43:09.226 22:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:43:09.226 22:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 423367 00:43:09.226 22:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:43:09.226 Initializing NVMe Controllers 00:43:09.226 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:43:09.226 Controller IO queue size 128, less than required. 00:43:09.226 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:43:09.226 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:43:09.226 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:43:09.226 Initialization complete. Launching workers. 
00:43:09.226 ========================================================
00:43:09.226 Latency(us)
00:43:09.226 Device Information : IOPS MiB/s Average min max
00:43:09.226 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002706.87 1000229.90 1041430.70
00:43:09.226 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004067.74 1000266.52 1009977.53
00:43:09.226 ========================================================
00:43:09.226 Total : 256.00 0.12 1003387.30 1000229.90 1041430.70
00:43:09.226
00:43:09.795 22:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:43:09.795 22:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 423367 00:43:09.795 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (423367) - No such process 00:43:09.795 22:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 423367 00:43:09.796 22:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:43:09.796 22:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:43:09.796 22:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:43:09.796 22:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:43:09.796 22:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:09.796 22:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:43:09.796 22:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:09.796 22:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:09.796 rmmod nvme_tcp 00:43:09.796 rmmod nvme_fabrics 00:43:09.796 rmmod nvme_keyring 00:43:09.796 22:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:09.796 22:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:43:09.796 22:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:43:09.796 22:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 422344 ']' 00:43:09.796 22:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 422344 00:43:09.796 22:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 422344 ']' 00:43:09.796 22:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 422344 00:43:09.796 22:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:43:09.796 22:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:09.796 22:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 422344 00:43:09.796 22:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:43:09.796 22:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:43:09.796 22:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 422344' 00:43:09.796 killing process with pid 422344 00:43:09.796 22:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 422344 00:43:09.796 22:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 422344 00:43:10.057 22:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:43:10.057 22:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:43:10.057 22:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:43:10.057 22:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:43:10.057 22:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:43:10.057 22:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:43:10.057 22:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:43:10.057 22:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:10.057 22:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:10.057 22:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:10.057 22:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:10.057 22:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:11.969 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:11.969 00:43:11.969 real 0m18.105s 00:43:11.969 user 0m26.296s 00:43:11.969 sys 0m7.355s 00:43:11.969 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:11.969 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:43:11.969 ************************************ 00:43:11.969 END TEST nvmf_delete_subsystem 00:43:11.969 ************************************ 00:43:12.231 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:43:12.231 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:43:12.231 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:43:12.231 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:43:12.231 ************************************ 00:43:12.231 START TEST nvmf_host_management 00:43:12.231 ************************************ 00:43:12.231 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:43:12.231 * Looking for test storage... 00:43:12.231 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:12.231 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:43:12.231 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:43:12.231 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:43:12.231 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:43:12.231 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:12.231 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:12.231 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:12.231 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:43:12.231 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:43:12.231 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:43:12.231 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:43:12.231 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:43:12.231 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:43:12.231 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:43:12.231 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:12.231 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:43:12.231 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:43:12.231 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:12.231 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:12.231 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:43:12.231 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:43:12.231 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:12.231 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:43:12.231 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:43:12.231 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:43:12.231 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:43:12.231 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:12.231 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:43:12.231 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:43:12.231 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:12.231 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:12.231 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:43:12.231 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:12.231 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:43:12.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:12.231 --rc genhtml_branch_coverage=1 00:43:12.231 --rc genhtml_function_coverage=1 00:43:12.231 --rc genhtml_legend=1 00:43:12.231 --rc geninfo_all_blocks=1 00:43:12.231 --rc geninfo_unexecuted_blocks=1 00:43:12.231 00:43:12.231 ' 00:43:12.231 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:43:12.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:12.231 --rc genhtml_branch_coverage=1 00:43:12.231 --rc genhtml_function_coverage=1 00:43:12.231 --rc genhtml_legend=1 00:43:12.231 --rc geninfo_all_blocks=1 00:43:12.231 --rc geninfo_unexecuted_blocks=1 00:43:12.231 00:43:12.231 ' 00:43:12.231 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:43:12.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:12.231 --rc genhtml_branch_coverage=1 00:43:12.231 --rc genhtml_function_coverage=1 00:43:12.231 --rc genhtml_legend=1 00:43:12.231 --rc geninfo_all_blocks=1 00:43:12.231 --rc geninfo_unexecuted_blocks=1 00:43:12.231 00:43:12.231 ' 00:43:12.231 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:43:12.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:12.231 --rc genhtml_branch_coverage=1 00:43:12.231 --rc genhtml_function_coverage=1 00:43:12.231 --rc genhtml_legend=1 
00:43:12.231 --rc geninfo_all_blocks=1 00:43:12.231 --rc geninfo_unexecuted_blocks=1 00:43:12.232 00:43:12.232 ' 00:43:12.232 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:12.494 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:43:12.494 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:12.494 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:12.494 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:12.494 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:12.494 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:12.494 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:12.494 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:12.494 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:12.494 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:12.494 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:12.494 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:43:12.494 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:43:12.494 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:12.494 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:12.494 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:12.494 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:12.494 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:12.494 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:43:12.494 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:12.494 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:12.494 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:12.494 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:12.494 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:12.495 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:12.495 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:43:12.495 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:12.495 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:43:12.495 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:12.495 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:12.495 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:12.495 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:12.495 22:42:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:12.495 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:12.495 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:12.495 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:12.495 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:12.495 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:12.495 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:43:12.495 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:43:12.495 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:43:12.495 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:43:12.495 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:12.495 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:43:12.495 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:43:12.495 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:43:12.495 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:12.495 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:12.495 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:12.495 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:43:12.495 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:43:12.495 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:43:12.495 22:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:20.649 22:42:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:43:20.649 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:43:20.649 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 
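The discovery loop above walks the whitelisted PCI functions and resolves each one to its kernel net device through sysfs, which is what produces the "Found net devices under ..." lines that follow. As a standalone sketch of that same lookup (standard Linux sysfs paths; the PCI address is the one from this log):

    pci=0000:4b:00.0
    # Each entry under .../net/ is a network interface bound to this PCI function;
    # stripping the path prefix yields the interface name (here: cvl_0_0).
    for dev in "/sys/bus/pci/devices/$pci/net/"*; do
        echo "Found net devices under $pci: ${dev##*/}"
    done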
00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:43:20.649 Found net devices under 0000:4b:00.0: cvl_0_0 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:43:20.649 Found net devices under 0000:4b:00.1: cvl_0_1 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:20.649 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:20.650 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:20.650 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:20.650 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:20.650 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:20.650 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:20.650 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:43:20.650 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:43:20.650 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.591 ms
00:43:20.650
00:43:20.650 --- 10.0.0.2 ping statistics ---
00:43:20.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:43:20.650 rtt min/avg/max/mdev = 0.591/0.591/0.591/0.000 ms
00:43:20.650 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:43:20.650 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:43:20.650 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms
00:43:20.650
00:43:20.650 --- 10.0.0.1 ping statistics ---
00:43:20.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:43:20.650 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms
00:43:20.650 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:20.650 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:43:20.650 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:43:20.650 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:20.650 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:43:20.650 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:43:20.650 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:20.650 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:43:20.650 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:43:20.650 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:43:20.650 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:43:20.650 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:43:20.650 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:43:20.650 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:20.650 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:43:20.650 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=428735 00:43:20.650 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 428735 00:43:20.650 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:43:20.650 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 428735 ']' 00:43:20.650 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:20.650 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:20.650 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:43:20.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:20.650 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:20.650 22:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:43:20.650 [2024-10-01 22:42:15.011908] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:20.650 [2024-10-01 22:42:15.013853] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:43:20.650 [2024-10-01 22:42:15.013942] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:20.650 [2024-10-01 22:42:15.104725] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:20.650 [2024-10-01 22:42:15.201429] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:20.650 [2024-10-01 22:42:15.201494] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:20.650 [2024-10-01 22:42:15.201503] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:20.650 [2024-10-01 22:42:15.201510] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:20.650 [2024-10-01 22:42:15.201517] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:20.650 [2024-10-01 22:42:15.201679] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:43:20.650 [2024-10-01 22:42:15.201902] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:43:20.650 [2024-10-01 22:42:15.202066] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:43:20.650 [2024-10-01 22:42:15.202066] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:43:20.650 [2024-10-01 22:42:15.345217] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:43:20.650 [2024-10-01 22:42:15.346064] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:43:20.650 [2024-10-01 22:42:15.347010] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:43:20.650 [2024-10-01 22:42:15.347055] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:43:20.650 [2024-10-01 22:42:15.347156] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
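The target launch traced above (nvmf_tgt run inside the test namespace in interrupt mode, reactors on cores 1-4) condenses to the following sketch; the flags are taken verbatim from the log, while the relative binary path is an adaptation of the absolute Jenkins workspace path:

    # Start the NVMe-oF target inside the test namespace: shared-memory id 0,
    # all tracepoint groups enabled (-e 0xFFFF), interrupt mode, core mask 0x1E.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
    nvmfpid=$!
    # The harness then polls (waitforlisten) until /var/tmp/spdk.sock accepts
    # connections before issuing any RPC calls against the target.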
00:43:20.650 22:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:20.650 22:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:43:20.650 22:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:43:20.650 22:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:20.650 22:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:43:20.650 22:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:20.650 22:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:20.650 22:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:20.650 22:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:43:20.650 [2024-10-01 22:42:15.854999] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:20.650 22:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:20.650 22:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:43:20.650 22:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:20.650 22:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:43:20.650 22:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:43:20.650 22:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:43:20.650 22:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:43:20.650 22:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:20.650 22:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:43:21.016 Malloc0 00:43:21.016 [2024-10-01 22:42:15.939095] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:21.016 22:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:21.016 22:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:43:21.017 22:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:21.017 22:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:43:21.017 22:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=428914 00:43:21.017 22:42:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 428914 /var/tmp/bdevperf.sock 00:43:21.017 22:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 428914 ']' 00:43:21.017 22:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:43:21.017 22:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:21.017 22:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:43:21.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:43:21.017 22:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:43:21.017 22:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:21.017 22:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:43:21.017 22:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:43:21.017 22:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:43:21.017 22:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:43:21.017 22:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:43:21.017 22:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:43:21.017 { 00:43:21.017 "params": { 00:43:21.017 "name": "Nvme$subsystem", 00:43:21.017 "trtype": "$TEST_TRANSPORT", 00:43:21.017 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:21.017 "adrfam": "ipv4", 00:43:21.017 "trsvcid": "$NVMF_PORT", 00:43:21.017 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:21.017 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:21.017 "hdgst": ${hdgst:-false}, 00:43:21.017 "ddgst": ${ddgst:-false} 00:43:21.017 }, 00:43:21.017 "method": "bdev_nvme_attach_controller" 00:43:21.017 } 00:43:21.017 EOF 00:43:21.017 )") 00:43:21.017 22:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:43:21.017 22:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 
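The heredoc above is the harness's gen_nvmf_target_json helper expanding per-subsystem parameters into a bdev_nvme_attach_controller config; the rendered JSON is printed in the trace that follows. Feeding it to bdevperf through process substitution, which is what the traced /dev/fd/63 argument corresponds to, looks roughly like this sketch (binary path and helper usage as in this log):

    # bdevperf reads its bdev configuration from the generated JSON and runs
    # a 10-second verify workload, queue depth 64, 64 KiB I/O size.
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 10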
00:43:21.017 22:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:43:21.017 22:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:43:21.017 "params": { 00:43:21.017 "name": "Nvme0", 00:43:21.017 "trtype": "tcp", 00:43:21.017 "traddr": "10.0.0.2", 00:43:21.017 "adrfam": "ipv4", 00:43:21.017 "trsvcid": "4420", 00:43:21.017 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:21.017 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:21.017 "hdgst": false, 00:43:21.017 "ddgst": false 00:43:21.017 }, 00:43:21.017 "method": "bdev_nvme_attach_controller" 00:43:21.017 }' 00:43:21.017 [2024-10-01 22:42:16.045795] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:43:21.017 [2024-10-01 22:42:16.045850] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid428914 ] 00:43:21.017 [2024-10-01 22:42:16.106512] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:21.017 [2024-10-01 22:42:16.171453] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:43:21.325 Running I/O for 10 seconds... 00:43:21.908 22:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:21.908 22:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:43:21.908 22:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:43:21.908 22:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:21.909 22:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:43:21.909 22:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:21.909 22:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:43:21.909 22:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:43:21.909 22:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:43:21.909 22:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:43:21.909 22:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:43:21.909 22:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:43:21.909 22:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:43:21.909 22:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:43:21.909 22:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:43:21.909 22:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:43:21.909 22:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:21.909 22:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:43:21.909 22:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:21.909 22:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:43:21.909 22:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:43:21.909 22:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:43:21.909 22:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:43:21.909 22:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:43:21.909 22:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:43:21.909 22:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:21.909 22:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:43:21.909 [2024-10-01 22:42:16.922589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb184a0 is same with the state(6) to be set 00:43:21.909 [2024-10-01 22:42:16.922631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb184a0 is same with the state(6) to be set 00:43:21.909 [2024-10-01 22:42:16.922641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb184a0 is same with the state(6) to be set 00:43:21.909 [2024-10-01 22:42:16.922648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb184a0 is same with the state(6) to be set 00:43:21.909 [2024-10-01 22:42:16.922655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb184a0 is same with the state(6) to be set 00:43:21.909 [2024-10-01 22:42:16.922662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb184a0 is same with the state(6) to be set 00:43:21.909 [2024-10-01 22:42:16.922669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb184a0 is same with the state(6) to be set 00:43:21.909 [2024-10-01 22:42:16.922676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb184a0 is same with the state(6) to be set 00:43:21.909 [2024-10-01 22:42:16.922682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb184a0 is same with the state(6) to be set 00:43:21.909 [2024-10-01 22:42:16.922689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb184a0 is same with the state(6) to be set 00:43:21.909 [2024-10-01 22:42:16.922696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb184a0 is same with the state(6) to be set 00:43:21.909 
[2024-10-01 22:42:16.922702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb184a0 is same with the state(6) to be set 00:43:21.909 [2024-10-01 22:42:16.923000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0xb184a0 is same with the state(6) to be set 00:43:21.910 [2024-10-01 22:42:16.923006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb184a0 is same with the state(6) to be set 00:43:21.910 [2024-10-01 22:42:16.923012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb184a0 is same with the state(6) to be set 00:43:21.910 [2024-10-01 22:42:16.923019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb184a0 is same with the state(6) to be set 00:43:21.910 [2024-10-01 22:42:16.923026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb184a0 is same with the state(6) to be set 00:43:21.910 [2024-10-01 22:42:16.923034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb184a0 is same with the state(6) to be set 00:43:21.910 [2024-10-01 22:42:16.923041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb184a0 is same with the state(6) to be set 00:43:21.910 [2024-10-01 22:42:16.923048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb184a0 is same with the state(6) to be set 00:43:21.910 [2024-10-01 22:42:16.923054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb184a0 is same with the state(6) to be set 00:43:21.910 [2024-10-01 22:42:16.923218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:21.910 [2024-10-01 22:42:16.923256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:21.910 [2024-10-01 22:42:16.923274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:21.910 [2024-10-01 22:42:16.923282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:21.910 [2024-10-01 22:42:16.923292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:21.910 [2024-10-01 22:42:16.923299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:21.910 [2024-10-01 22:42:16.923309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:21.910 [2024-10-01 22:42:16.923316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:21.910 [2024-10-01 22:42:16.923326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:21.910 [2024-10-01 22:42:16.923333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:21.910 [2024-10-01 22:42:16.923348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:21.910 [2024-10-01 22:42:16.923356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:21.910 [2024-10-01 22:42:16.923366] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:21.911 [2024-10-01 22:42:16.924221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:21.911 [2024-10-01 22:42:16.924230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:21.911 [2024-10-01 22:42:16.924237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:21.911 [2024-10-01 22:42:16.924247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:21.911 [2024-10-01 22:42:16.924254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:21.912 [2024-10-01 22:42:16.924263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:21.912 [2024-10-01 22:42:16.924271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:21.912 [2024-10-01 22:42:16.924280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:21.912 [2024-10-01 22:42:16.924287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:21.912 [2024-10-01 22:42:16.924296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:21.912 [2024-10-01 22:42:16.924304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:21.912 [2024-10-01 22:42:16.924313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:21.912 [2024-10-01 22:42:16.924320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:21.912 [2024-10-01 22:42:16.924329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:21.912 [2024-10-01 22:42:16.924337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:21.912 [2024-10-01 22:42:16.924346] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2040420 is same with the state(6) to be set 00:43:21.912 [2024-10-01 22:42:16.924389] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2040420 was disconnected and freed. reset controller. 
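Before the @84 remove_host call that produced the abort storm above, the waitforio helper (traced at host_management.sh@54-@60) had confirmed bdevperf was actually issuing reads by polling the bdev's iostat counters over the RPC socket. A rough standalone equivalent of that loop, with the rpc.py path and sleep interval assumed rather than taken from this excerpt:

ret=1
for ((i = 10; i != 0; i--)); do
    # bdev_get_iostat reports per-bdev counters as JSON; extract num_read_ops.
    read_io_count=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 |
        jq -r '.bdevs[0].num_read_ops')
    if [ "$read_io_count" -ge 100 ]; then
        ret=0 # enough reads observed (the run above saw 515 on its first poll)
        break
    fi
    sleep 0.25 # pacing is an assumption; the harness's exact delay is not in this excerpt
done

Only once ret=0 does the test yank the host's access mid-I/O, which is what triggers the SQ-deletion aborts logged above.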
00:43:21.912 [2024-10-01 22:42:16.925634] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:43:21.912 task offset: 73728 on job bdev=Nvme0n1 fails
00:43:21.912
00:43:21.912 Latency(us)
00:43:21.912 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:43:21.912 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:43:21.912 Job: Nvme0n1 ended in about 0.40 seconds with error
00:43:21.912 Verification LBA range: start 0x0 length 0x400
00:43:21.912 Nvme0n1 : 0.40 1445.38 90.34 160.60 0.00 38618.43 4478.29 36918.61
00:43:21.912 ===================================================================================================================
00:43:21.912 Total : 1445.38 90.34 160.60 0.00 38618.43 4478.29 36918.61
00:43:21.912 [2024-10-01 22:42:16.927644] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:43:21.912 22:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:43:21.912 [2024-10-01 22:42:16.927669] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2030290 (9): Bad file descriptor
00:43:21.912 22:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:43:21.912 22:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:43:21.912 22:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:43:21.912 [2024-10-01 22:42:16.928715] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
00:43:21.912 [2024-10-01 22:42:16.928788] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:43:21.912 [2024-10-01 22:42:16.928809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:43:21.912 [2024-10-01 22:42:16.928823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0
00:43:21.912 [2024-10-01 22:42:16.928831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132
00:43:21.912 [2024-10-01 22:42:16.928839] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:43:21.912 [2024-10-01 22:42:16.928846] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2030290
00:43:21.912 [2024-10-01 22:42:16.928864] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2030290 (9): Bad file descriptor
00:43:21.912 [2024-10-01 22:42:16.928876] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:43:21.912 [2024-10-01 22:42:16.928884] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:43:21.912 [2024-10-01 22:42:16.928892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
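The cascade just logged is the behavior under test: host_management.sh@84 removed nqn.2016-06.io.spdk:host0 from cnode0's allowed hosts, the target freed the host's qpairs (the SQ-deletion aborts), and the initiator's automatic reconnect was then refused at the fabric CONNECT step ('does not allow host', sct 1, sc 132) until @85 re-added the host. The same toggle can be driven by hand against a running target with SPDK's rpc.py (default socket /var/tmp/spdk.sock; pass -s otherwise):

# Revoke host0's access: the target tears down its queues and rejects reconnects.
scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# Restore access: the initiator's next reconnect attempt completes normally.
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0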
00:43:21.912 [2024-10-01 22:42:16.928905] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:21.912 22:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:21.912 22:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:43:22.856 22:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 428914 00:43:22.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (428914) - No such process 00:43:22.857 22:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:43:22.857 22:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:43:22.857 22:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:43:22.857 22:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:43:22.857 22:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:43:22.857 22:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:43:22.857 22:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:43:22.857 22:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:43:22.857 { 00:43:22.857 "params": { 00:43:22.857 "name": "Nvme$subsystem", 00:43:22.857 "trtype": "$TEST_TRANSPORT", 00:43:22.857 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:22.857 "adrfam": "ipv4", 00:43:22.857 "trsvcid": "$NVMF_PORT", 00:43:22.857 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:22.857 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:22.857 "hdgst": ${hdgst:-false}, 00:43:22.857 "ddgst": ${ddgst:-false} 00:43:22.857 }, 00:43:22.857 "method": "bdev_nvme_attach_controller" 00:43:22.857 } 00:43:22.857 EOF 00:43:22.857 )") 00:43:22.857 22:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:43:22.857 22:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:43:22.857 22:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:43:22.857 22:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:43:22.857 "params": { 00:43:22.857 "name": "Nvme0", 00:43:22.857 "trtype": "tcp", 00:43:22.857 "traddr": "10.0.0.2", 00:43:22.857 "adrfam": "ipv4", 00:43:22.857 "trsvcid": "4420", 00:43:22.857 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:22.857 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:22.857 "hdgst": false, 00:43:22.857 "ddgst": false 00:43:22.857 }, 00:43:22.857 "method": "bdev_nvme_attach_controller" 00:43:22.857 }' 00:43:22.857 [2024-10-01 22:42:17.999856] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
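For the second pass (@100) the harness relaunches bdevperf for a 1-second verify run, again feeding the regenerated target JSON over an anonymous descriptor (/dev/fd/62 in the trace). An equivalent invocation, assuming the SPDK tree as the working directory and gen_nvmf_target_json sourced from the harness's nvmf/common.sh:

# -q queue depth, -o I/O size (bytes), -w workload, -t runtime (seconds);
# <(...) expands to a /dev/fd/N path, so no config file touches disk.
./build/examples/bdevperf --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1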
00:43:22.857 [2024-10-01 22:42:17.999913] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid429275 ]
00:43:22.857 [2024-10-01 22:42:18.061808] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:43:23.119 [2024-10-01 22:42:18.125698] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:43:23.119 Running I/O for 1 seconds...
00:43:24.506 1652.00 IOPS, 103.25 MiB/s
00:43:24.506
00:43:24.506 Latency(us)
00:43:24.506 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:43:24.506 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:43:24.506 Verification LBA range: start 0x0 length 0x400
00:43:24.506 Nvme0n1 : 1.03 1687.33 105.46 0.00 0.00 37121.74 2375.68 36044.80
00:43:24.506 ===================================================================================================================
00:43:24.506 Total : 1687.33 105.46 0.00 0.00 37121.74 2375.68 36044.80
00:43:24.506 22:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:43:24.506 22:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:43:24.506 22:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:43:24.506 22:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:43:24.506 22:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:43:24.506 22:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup
00:43:24.506 22:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:43:24.506 22:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:43:24.506 22:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:43:24.506 22:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:43:24.506 22:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:43:24.506 rmmod nvme_tcp
00:43:24.506 rmmod nvme_fabrics
00:43:24.506 rmmod nvme_keyring
00:43:24.506 22:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:43:24.506 22:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:43:24.506 22:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:43:24.506 22:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 428735 ']'
00:43:24.506 22:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 428735
00:43:24.506 22:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management --
common/autotest_common.sh@950 -- # '[' -z 428735 ']' 00:43:24.506 22:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 428735 00:43:24.506 22:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:43:24.506 22:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:24.506 22:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 428735 00:43:24.506 22:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:43:24.506 22:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:43:24.506 22:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 428735' 00:43:24.506 killing process with pid 428735 00:43:24.506 22:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 428735 00:43:24.506 22:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 428735 00:43:24.767 [2024-10-01 22:42:19.838222] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:43:24.767 22:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:43:24.767 22:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:43:24.767 22:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:43:24.767 22:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:43:24.767 22:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:43:24.767 22:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:43:24.767 22:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:43:24.767 22:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:24.767 22:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:24.767 22:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:24.767 22:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:24.767 22:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:27.317 22:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:27.317 22:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:43:27.317 00:43:27.317 real 0m14.664s 00:43:27.317 user 0m19.435s 00:43:27.317 sys 0m7.578s 00:43:27.317 22:42:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:27.317 22:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:43:27.317 ************************************ 00:43:27.317 END TEST nvmf_host_management 00:43:27.317 ************************************ 00:43:27.317 22:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:43:27.317 22:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:43:27.317 22:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:27.317 22:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:43:27.317 ************************************ 00:43:27.317 START TEST nvmf_lvol 00:43:27.317 ************************************ 00:43:27.317 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:43:27.317 * Looking for test storage... 00:43:27.317 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:27.317 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:43:27.317 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:43:27.317 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:43:27.317 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:43:27.317 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:27.317 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:27.317 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:27.317 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:43:27.317 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:43:27.317 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:43:27.317 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:43:27.317 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:43:27.317 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:43:27.317 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:43:27.317 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:27.317 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:43:27.317 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:43:27.317 22:42:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:27.317 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:27.317 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:43:27.317 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:43:27.317 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:27.317 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:43:27.317 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:43:27.317 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:43:27.317 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:43:27.317 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:27.317 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:43:27.317 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:43:27.317 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:27.317 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:27.317 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:43:27.317 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:27.317 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:43:27.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:27.317 --rc genhtml_branch_coverage=1 00:43:27.317 --rc genhtml_function_coverage=1 00:43:27.317 --rc genhtml_legend=1 00:43:27.317 --rc geninfo_all_blocks=1 00:43:27.317 --rc geninfo_unexecuted_blocks=1 00:43:27.317 00:43:27.317 ' 00:43:27.317 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:43:27.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:27.317 --rc genhtml_branch_coverage=1 00:43:27.317 --rc genhtml_function_coverage=1 00:43:27.317 --rc genhtml_legend=1 00:43:27.317 --rc geninfo_all_blocks=1 00:43:27.317 --rc geninfo_unexecuted_blocks=1 00:43:27.317 00:43:27.317 ' 00:43:27.317 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:43:27.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:27.317 --rc genhtml_branch_coverage=1 00:43:27.317 --rc genhtml_function_coverage=1 00:43:27.317 --rc genhtml_legend=1 00:43:27.318 --rc geninfo_all_blocks=1 00:43:27.318 --rc geninfo_unexecuted_blocks=1 00:43:27.318 00:43:27.318 ' 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:43:27.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:27.318 --rc genhtml_branch_coverage=1 00:43:27.318 --rc genhtml_function_coverage=1 00:43:27.318 --rc 
genhtml_legend=1 00:43:27.318 --rc geninfo_all_blocks=1 00:43:27.318 --rc geninfo_unexecuted_blocks=1 00:43:27.318 00:43:27.318 ' 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:27.318 22:42:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:43:27.318 22:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:43:33.912 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:33.912 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:34.174 22:42:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:43:34.174 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:43:34.174 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:43:34.174 Found net devices under 0000:4b:00.0: cvl_0_0 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:43:34.174 Found net devices under 0000:4b:00.1: cvl_0_1 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:34.174 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:34.175 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:34.175 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:34.175 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:34.175 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:34.175 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:34.175 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:34.175 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:34.175 
22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:34.175 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:34.175 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:34.175 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:34.175 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:34.175 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:34.436 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:34.436 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:34.436 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:34.436 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:34.436 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:43:34.436 00:43:34.436 --- 10.0.0.2 ping statistics --- 00:43:34.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:34.436 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:43:34.436 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:34.436 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:34.436 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:43:34.436 00:43:34.436 --- 10.0.0.1 ping statistics --- 00:43:34.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:34.436 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:43:34.436 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:34.436 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:43:34.436 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:43:34.436 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:34.436 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:43:34.436 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:43:34.436 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:34.436 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:43:34.436 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:43:34.436 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:43:34.436 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:43:34.436 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:34.436 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:43:34.436 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=433819 00:43:34.436 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 433819 00:43:34.436 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 433819 ']' 00:43:34.436 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:34.436 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:34.436 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:34.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:34.436 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:34.436 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:43:34.436 22:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:43:34.436 [2024-10-01 22:42:29.561565] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
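
Note on the setup the trace has just completed: nvmftestinit builds a two-port NVMe/TCP rig out of the E810 pair by moving one port into a private network namespace to act as the target side, leaving the other in the root namespace as the initiator. Condensed into plain shell, and assuming the cvl_0_* interface names and 10.0.0.0/24 addressing this particular rig uses, the sequence is roughly:

    # target port moves into its own namespace; initiator port stays in the root ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # initiator = 10.0.0.1 on cvl_0_1, target = 10.0.0.2 on cvl_0_0, same /24
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    # bring both ports (and the namespace loopback) up
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # open the NVMe/TCP port on the initiator side, then verify reachability
    # in both directions with a single ping each way
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Each command above appears in the trace; only the comments are added (the real iptables rule additionally carries an SPDK_NVMF comment tag that the teardown path uses to find and remove it).
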
00:43:34.436 [2024-10-01 22:42:29.562710] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:43:34.436 [2024-10-01 22:42:29.562761] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:34.436 [2024-10-01 22:42:29.635009] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:43:34.698 [2024-10-01 22:42:29.710082] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:34.698 [2024-10-01 22:42:29.710124] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:34.698 [2024-10-01 22:42:29.710132] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:34.698 [2024-10-01 22:42:29.710139] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:34.698 [2024-10-01 22:42:29.710145] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:34.698 [2024-10-01 22:42:29.710283] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:43:34.698 [2024-10-01 22:42:29.710407] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:43:34.698 [2024-10-01 22:42:29.710410] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:43:34.698 [2024-10-01 22:42:29.816800] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:43:34.698 [2024-10-01 22:42:29.817267] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:43:34.698 [2024-10-01 22:42:29.817600] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:43:34.698 [2024-10-01 22:42:29.817862] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:43:35.269 22:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:35.269 22:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:43:35.269 22:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:43:35.269 22:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:35.269 22:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:43:35.269 22:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:35.269 22:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:43:35.530 [2024-10-01 22:42:30.546988] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:35.530 22:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:35.530 22:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:43:35.793 22:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:35.793 22:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:43:35.793 22:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:43:36.053 22:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:43:36.314 22:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=99f7c58c-a187-47e9-968a-f2654f1a9f1d 00:43:36.314 22:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 99f7c58c-a187-47e9-968a-f2654f1a9f1d lvol 20 00:43:36.315 22:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=80125168-778a-4e42-bf80-eaa4db0f5dec 00:43:36.315 22:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:43:36.575 22:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 80125168-778a-4e42-bf80-eaa4db0f5dec 00:43:36.836 22:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:36.836 [2024-10-01 22:42:32.019075] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:43:36.836 22:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:43:37.097 22:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=434306 00:43:37.097 22:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:43:37.097 22:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:43:38.038 22:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 80125168-778a-4e42-bf80-eaa4db0f5dec MY_SNAPSHOT 00:43:38.300 22:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=7fde0762-72b8-4afa-936d-908bf7ce2b08 00:43:38.300 22:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 80125168-778a-4e42-bf80-eaa4db0f5dec 30 00:43:38.560 22:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 7fde0762-72b8-4afa-936d-908bf7ce2b08 MY_CLONE 00:43:38.819 22:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=72ade627-752c-4c06-af58-10f1b1b790ec 00:43:38.819 22:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 72ade627-752c-4c06-af58-10f1b1b790ec 00:43:39.079 22:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 434306 00:43:49.072 Initializing NVMe Controllers 00:43:49.072 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:43:49.072 Controller IO queue size 128, less than required. 00:43:49.072 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:43:49.072 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:43:49.072 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:43:49.072 Initialization complete. Launching workers. 
00:43:49.072 ======================================================== 00:43:49.072 Latency(us) 00:43:49.072 Device Information : IOPS MiB/s Average min max 00:43:49.072 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11959.50 46.72 10705.43 1537.61 61309.98 00:43:49.072 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15690.10 61.29 8158.42 1366.12 66056.70 00:43:49.072 ======================================================== 00:43:49.072 Total : 27649.59 108.01 9260.10 1366.12 66056.70 00:43:49.072 00:43:49.072 22:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:49.072 22:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 80125168-778a-4e42-bf80-eaa4db0f5dec 00:43:49.072 22:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 99f7c58c-a187-47e9-968a-f2654f1a9f1d 00:43:49.072 22:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:43:49.072 22:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:43:49.072 22:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:43:49.072 22:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:43:49.072 22:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:43:49.072 22:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:49.072 22:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:43:49.072 22:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:49.072 22:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:49.072 rmmod nvme_tcp 00:43:49.072 rmmod nvme_fabrics 00:43:49.072 rmmod nvme_keyring 00:43:49.072 22:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:49.072 22:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:43:49.072 22:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:43:49.072 22:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 433819 ']' 00:43:49.072 22:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 433819 00:43:49.072 22:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 433819 ']' 00:43:49.072 22:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 433819 00:43:49.072 22:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:43:49.072 22:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:49.072 22:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 433819 00:43:49.072 22:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:43:49.072 22:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:43:49.072 22:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 433819' 00:43:49.072 killing process with pid 433819 00:43:49.072 22:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 433819 00:43:49.072 22:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 433819 00:43:49.072 22:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:43:49.072 22:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:43:49.072 22:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:43:49.072 22:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:43:49.072 22:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:43:49.072 22:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:43:49.072 22:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:43:49.072 22:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:49.072 22:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:49.072 22:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:49.072 22:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:49.072 22:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:50.454 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:50.454 00:43:50.454 real 0m23.535s 00:43:50.454 user 0m55.538s 00:43:50.454 sys 0m10.508s 00:43:50.454 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:50.454 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:43:50.454 ************************************ 00:43:50.454 END TEST nvmf_lvol 00:43:50.454 ************************************ 00:43:50.454 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:43:50.454 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:43:50.454 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:50.454 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:43:50.454 ************************************ 00:43:50.454 START TEST nvmf_lvs_grow 00:43:50.454 
************************************ 00:43:50.454 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:43:50.716 * Looking for test storage... 00:43:50.716 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:43:50.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:50.716 --rc genhtml_branch_coverage=1 00:43:50.716 --rc genhtml_function_coverage=1 00:43:50.716 --rc genhtml_legend=1 00:43:50.716 --rc geninfo_all_blocks=1 00:43:50.716 --rc geninfo_unexecuted_blocks=1 00:43:50.716 00:43:50.716 ' 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:43:50.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:50.716 --rc genhtml_branch_coverage=1 00:43:50.716 --rc genhtml_function_coverage=1 00:43:50.716 --rc genhtml_legend=1 00:43:50.716 --rc geninfo_all_blocks=1 00:43:50.716 --rc geninfo_unexecuted_blocks=1 00:43:50.716 00:43:50.716 ' 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:43:50.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:50.716 --rc genhtml_branch_coverage=1 00:43:50.716 --rc genhtml_function_coverage=1 00:43:50.716 --rc genhtml_legend=1 00:43:50.716 --rc geninfo_all_blocks=1 00:43:50.716 --rc geninfo_unexecuted_blocks=1 00:43:50.716 00:43:50.716 ' 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:43:50.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:50.716 --rc genhtml_branch_coverage=1 00:43:50.716 --rc genhtml_function_coverage=1 00:43:50.716 --rc genhtml_legend=1 00:43:50.716 --rc geninfo_all_blocks=1 00:43:50.716 --rc geninfo_unexecuted_blocks=1 00:43:50.716 00:43:50.716 ' 00:43:50.716 22:42:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:50.716 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:50.717 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:50.717 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:43:50.717 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:50.717 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:50.717 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:50.717 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:50.717 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:50.717 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:50.717 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:43:50.717 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:50.717 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:43:50.717 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:50.717 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:50.717 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:50.717 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:50.717 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
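
The common.sh preamble repeats for every test in this suite; the part worth reading in the expanded trace is how build_nvmf_app_args assembles NVMF_APP. The guards are shown post-expansion ('[' 0 -eq 1 ']', '[' 1 -eq 1 ']'), so the variable names behind the 0 and the 1 are not visible here; what the taken branches amount to is:

    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shm id + full tracepoint mask
    NVMF_APP+=("${NO_HUGE[@]}")
    NVMF_APP+=(--interrupt-mode)                  # the '[' 1 -eq 1 ']' branch:
                                                  # this suite runs interrupt mode

which is why the target command line seen earlier (when nvmf_lvol launched its target) carried -i 0 -e 0xFFFF --interrupt-mode.
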
00:43:50.717 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:50.717 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:50.717 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:50.717 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:50.717 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:50.717 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:43:50.717 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:43:50.717 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:43:50.717 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:43:50.717 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:50.717 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:43:50.717 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:43:50.717 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:43:50.717 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:50.717 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:50.717 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:50.717 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:43:50.717 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:43:50.717 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:43:50.717 22:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:43:58.863 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:58.863 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:43:58.863 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:58.863 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:58.863 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:58.863 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:58.863 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:58.863 22:42:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:43:58.863 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
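[editor's note] gather_supported_nvmf_pci_devs, traced above, buckets NIC functions by PCI vendor/device ID (Intel E810 0x1592/0x159b, X722 0x37d2, a list of Mellanox IDs) and, since this is an E810 TCP run, keeps only the e810 bucket. The real helper reads a prebuilt pci_bus_cache; a minimal standalone sketch of the same idea over sysfs:

    # Sketch: collect E810-family functions the way the e810 array above is filled.
    e810=()
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(<"$dev/vendor")    # e.g. 0x8086
        device=$(<"$dev/device")    # e.g. 0x159b
        if [[ $vendor == 0x8086 && ($device == 0x1592 || $device == 0x159b) ]]; then
            e810+=("${dev##*/}")
            echo "Found ${dev##*/} ($vendor - $device)"
        fi
    done

On this host the loop body fires twice, matching the two "Found 0000:4b:00.x (0x8086 - 0x159b)" lines in the trace.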
00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:43:58.864 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:43:58.864 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:43:58.864 Found net devices under 0000:4b:00.0: cvl_0_0 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:43:58.864 Found net devices under 0000:4b:00.1: cvl_0_1 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:58.864 22:42:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:58.864 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:58.864 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:43:58.864 00:43:58.864 --- 10.0.0.2 ping statistics --- 00:43:58.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:58.864 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:58.864 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:58.864 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:43:58.864 00:43:58.864 --- 10.0.0.1 ping statistics --- 00:43:58.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:58.864 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:43:58.864 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:43:58.865 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:58.865 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:43:58.865 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:43:58.865 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:58.865 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:43:58.865 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:43:58.865 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:43:58.865 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:43:58.865 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:58.865 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:43:58.865 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=440636 00:43:58.865 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 440636 00:43:58.865 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:43:58.865 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 440636 ']' 00:43:58.865 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:58.865 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:58.865 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:58.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:58.865 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:58.865 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:43:58.865 [2024-10-01 22:42:53.417606] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
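[editor's note] Everything from "ip netns add" through "modprobe nvme-tcp" above builds a loopback topology over the physical E810 link: the target port (cvl_0_0, 10.0.0.2) is moved into its own network namespace, so traffic from the initiator port (cvl_0_1, 10.0.0.1) must cross a real wire even though both ends sit on one host; the two pings prove the path in each direction. A condensed replay of the traced commands, ending with the nvmf_tgt launch and an RPC-socket poll (the until-loop is a sketch of what waitforlisten does, not its actual body):

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                      # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1               # target -> initiator
    modprobe nvme-tcp
    # Start the target in interrupt mode on core 0 and wait for its RPC socket:
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
    until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done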
00:43:58.865 [2024-10-01 22:42:53.418732] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:43:58.865 [2024-10-01 22:42:53.418781] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:58.865 [2024-10-01 22:42:53.490094] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:58.865 [2024-10-01 22:42:53.564741] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:58.865 [2024-10-01 22:42:53.564780] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:58.865 [2024-10-01 22:42:53.564789] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:58.865 [2024-10-01 22:42:53.564797] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:58.865 [2024-10-01 22:42:53.564804] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:58.865 [2024-10-01 22:42:53.564826] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:43:58.865 [2024-10-01 22:42:53.666853] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:43:58.865 [2024-10-01 22:42:53.667114] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:43:59.126 22:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:59.126 22:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:43:59.126 22:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:43:59.126 22:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:59.126 22:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:43:59.126 22:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:59.126 22:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:43:59.386 [2024-10-01 22:42:54.409253] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:59.386 22:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:43:59.386 22:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:43:59.386 22:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:59.386 22:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:43:59.386 ************************************ 00:43:59.386 START TEST lvs_grow_clean 00:43:59.386 ************************************ 00:43:59.386 22:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # 
lvs_grow 00:43:59.386 22:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:43:59.386 22:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:43:59.386 22:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:43:59.386 22:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:43:59.386 22:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:43:59.386 22:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:43:59.386 22:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:43:59.386 22:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:43:59.387 22:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:43:59.648 22:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:43:59.648 22:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:43:59.648 22:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=6de702f0-e6c9-4b70-9406-adba0958e048 00:43:59.648 22:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6de702f0-e6c9-4b70-9406-adba0958e048 00:43:59.648 22:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:43:59.908 22:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:43:59.908 22:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:43:59.908 22:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6de702f0-e6c9-4b70-9406-adba0958e048 lvol 150 00:44:00.169 22:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=c602595e-e9c5-4fdc-9cdd-cbec621ae2c8 00:44:00.169 22:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:44:00.169 22:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:44:00.169 [2024-10-01 22:42:55.353188] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:44:00.169 [2024-10-01 22:42:55.353269] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:44:00.169 true 00:44:00.169 22:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6de702f0-e6c9-4b70-9406-adba0958e048 00:44:00.169 22:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:44:00.430 22:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:44:00.430 22:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:44:00.690 22:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c602595e-e9c5-4fdc-9cdd-cbec621ae2c8 00:44:00.690 22:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:00.952 [2024-10-01 22:42:56.053822] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:00.952 22:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:44:01.212 22:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=441153 00:44:01.212 22:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:44:01.212 22:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:44:01.212 22:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 441153 /var/tmp/bdevperf.sock 00:44:01.212 22:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 441153 ']' 00:44:01.212 22:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:44:01.212 22:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:44:01.212 22:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:44:01.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:44:01.212 22:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:44:01.212 22:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:44:01.212 [2024-10-01 22:42:56.274519] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:44:01.212 [2024-10-01 22:42:56.274574] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid441153 ] 00:44:01.212 [2024-10-01 22:42:56.354647] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:01.212 [2024-10-01 22:42:56.422766] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:44:02.151 22:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:44:02.151 22:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:44:02.151 22:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:44:02.151 Nvme0n1 00:44:02.151 22:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:44:02.411 [ 00:44:02.411 { 00:44:02.411 "name": "Nvme0n1", 00:44:02.411 "aliases": [ 00:44:02.411 "c602595e-e9c5-4fdc-9cdd-cbec621ae2c8" 00:44:02.411 ], 00:44:02.411 "product_name": "NVMe disk", 00:44:02.411 "block_size": 4096, 00:44:02.411 "num_blocks": 38912, 00:44:02.411 "uuid": "c602595e-e9c5-4fdc-9cdd-cbec621ae2c8", 00:44:02.411 "numa_id": 0, 00:44:02.411 "assigned_rate_limits": { 00:44:02.411 "rw_ios_per_sec": 0, 00:44:02.411 "rw_mbytes_per_sec": 0, 00:44:02.411 "r_mbytes_per_sec": 0, 00:44:02.411 "w_mbytes_per_sec": 0 00:44:02.411 }, 00:44:02.411 "claimed": false, 00:44:02.411 "zoned": false, 00:44:02.411 "supported_io_types": { 00:44:02.411 "read": true, 00:44:02.411 "write": true, 00:44:02.411 "unmap": true, 00:44:02.411 "flush": true, 00:44:02.411 "reset": true, 00:44:02.411 "nvme_admin": true, 00:44:02.411 "nvme_io": true, 00:44:02.411 "nvme_io_md": false, 00:44:02.411 "write_zeroes": true, 00:44:02.411 "zcopy": false, 00:44:02.411 "get_zone_info": false, 00:44:02.411 "zone_management": false, 00:44:02.411 "zone_append": false, 00:44:02.411 "compare": true, 00:44:02.411 "compare_and_write": true, 00:44:02.411 "abort": true, 00:44:02.411 "seek_hole": false, 00:44:02.411 "seek_data": false, 00:44:02.411 "copy": true, 
00:44:02.411 "nvme_iov_md": false 00:44:02.411 }, 00:44:02.411 "memory_domains": [ 00:44:02.411 { 00:44:02.411 "dma_device_id": "system", 00:44:02.411 "dma_device_type": 1 00:44:02.411 } 00:44:02.411 ], 00:44:02.411 "driver_specific": { 00:44:02.411 "nvme": [ 00:44:02.411 { 00:44:02.411 "trid": { 00:44:02.411 "trtype": "TCP", 00:44:02.411 "adrfam": "IPv4", 00:44:02.411 "traddr": "10.0.0.2", 00:44:02.411 "trsvcid": "4420", 00:44:02.411 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:44:02.411 }, 00:44:02.411 "ctrlr_data": { 00:44:02.411 "cntlid": 1, 00:44:02.411 "vendor_id": "0x8086", 00:44:02.411 "model_number": "SPDK bdev Controller", 00:44:02.411 "serial_number": "SPDK0", 00:44:02.411 "firmware_revision": "25.01", 00:44:02.411 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:02.411 "oacs": { 00:44:02.411 "security": 0, 00:44:02.411 "format": 0, 00:44:02.411 "firmware": 0, 00:44:02.411 "ns_manage": 0 00:44:02.411 }, 00:44:02.411 "multi_ctrlr": true, 00:44:02.411 "ana_reporting": false 00:44:02.411 }, 00:44:02.411 "vs": { 00:44:02.411 "nvme_version": "1.3" 00:44:02.411 }, 00:44:02.411 "ns_data": { 00:44:02.411 "id": 1, 00:44:02.411 "can_share": true 00:44:02.411 } 00:44:02.411 } 00:44:02.411 ], 00:44:02.411 "mp_policy": "active_passive" 00:44:02.411 } 00:44:02.411 } 00:44:02.411 ] 00:44:02.411 22:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=441366 00:44:02.411 22:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:44:02.411 22:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:44:02.411 Running I/O for 10 seconds... 
00:44:03.350 Latency(us) 00:44:03.351 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:03.351 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:03.351 Nvme0n1 : 1.00 17596.00 68.73 0.00 0.00 0.00 0.00 0.00 00:44:03.351 =================================================================================================================== 00:44:03.351 Total : 17596.00 68.73 0.00 0.00 0.00 0.00 0.00 00:44:03.351 00:44:04.294 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6de702f0-e6c9-4b70-9406-adba0958e048 00:44:04.554 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:04.554 Nvme0n1 : 2.00 17662.50 68.99 0.00 0.00 0.00 0.00 0.00 00:44:04.554 =================================================================================================================== 00:44:04.554 Total : 17662.50 68.99 0.00 0.00 0.00 0.00 0.00 00:44:04.554 00:44:04.554 true 00:44:04.554 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6de702f0-e6c9-4b70-9406-adba0958e048 00:44:04.554 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:44:04.814 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:44:04.814 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:44:04.814 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 441366 00:44:05.384 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:05.384 Nvme0n1 : 3.00 17692.00 69.11 0.00 0.00 0.00 0.00 0.00 00:44:05.384 =================================================================================================================== 00:44:05.384 Total : 17692.00 69.11 0.00 0.00 0.00 0.00 0.00 00:44:05.384 00:44:06.768 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:06.768 Nvme0n1 : 4.00 17729.50 69.26 0.00 0.00 0.00 0.00 0.00 00:44:06.768 =================================================================================================================== 00:44:06.768 Total : 17729.50 69.26 0.00 0.00 0.00 0.00 0.00 00:44:06.768 00:44:07.710 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:07.710 Nvme0n1 : 5.00 17755.00 69.36 0.00 0.00 0.00 0.00 0.00 00:44:07.710 =================================================================================================================== 00:44:07.710 Total : 17755.00 69.36 0.00 0.00 0.00 0.00 0.00 00:44:07.710 00:44:08.653 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:08.653 Nvme0n1 : 6.00 17771.67 69.42 0.00 0.00 0.00 0.00 0.00 00:44:08.653 =================================================================================================================== 00:44:08.653 Total : 17771.67 69.42 0.00 0.00 0.00 0.00 0.00 00:44:08.653 00:44:09.596 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:09.596 Nvme0n1 : 7.00 17792.86 69.50 0.00 0.00 0.00 0.00 
0.00 00:44:09.596 =================================================================================================================== 00:44:09.596 Total : 17792.86 69.50 0.00 0.00 0.00 0.00 0.00 00:44:09.596 00:44:10.539 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:10.539 Nvme0n1 : 8.00 17800.88 69.53 0.00 0.00 0.00 0.00 0.00 00:44:10.539 =================================================================================================================== 00:44:10.539 Total : 17800.88 69.53 0.00 0.00 0.00 0.00 0.00 00:44:10.539 00:44:11.483 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:11.483 Nvme0n1 : 9.00 17814.11 69.59 0.00 0.00 0.00 0.00 0.00 00:44:11.483 =================================================================================================================== 00:44:11.483 Total : 17814.11 69.59 0.00 0.00 0.00 0.00 0.00 00:44:11.483 00:44:12.424 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:12.424 Nvme0n1 : 10.00 17824.70 69.63 0.00 0.00 0.00 0.00 0.00 00:44:12.424 =================================================================================================================== 00:44:12.424 Total : 17824.70 69.63 0.00 0.00 0.00 0.00 0.00 00:44:12.424 00:44:12.424 00:44:12.424 Latency(us) 00:44:12.424 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:12.424 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:12.424 Nvme0n1 : 10.01 17826.95 69.64 0.00 0.00 7175.07 2389.33 13325.65 00:44:12.424 =================================================================================================================== 00:44:12.424 Total : 17826.95 69.64 0.00 0.00 7175.07 2389.33 13325.65 00:44:12.424 { 00:44:12.424 "results": [ 00:44:12.424 { 00:44:12.424 "job": "Nvme0n1", 00:44:12.424 "core_mask": "0x2", 00:44:12.424 "workload": "randwrite", 00:44:12.424 "status": "finished", 00:44:12.424 "queue_depth": 128, 00:44:12.424 "io_size": 4096, 00:44:12.424 "runtime": 10.00592, 00:44:12.424 "iops": 17826.94644770296, 00:44:12.424 "mibps": 69.6365095613397, 00:44:12.424 "io_failed": 0, 00:44:12.424 "io_timeout": 0, 00:44:12.424 "avg_latency_us": 7175.066781630459, 00:44:12.424 "min_latency_us": 2389.3333333333335, 00:44:12.424 "max_latency_us": 13325.653333333334 00:44:12.424 } 00:44:12.424 ], 00:44:12.424 "core_count": 1 00:44:12.424 } 00:44:12.424 22:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 441153 00:44:12.424 22:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 441153 ']' 00:44:12.424 22:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 441153 00:44:12.424 22:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:44:12.424 22:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:44:12.424 22:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 441153 00:44:12.685 22:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:44:12.685 22:43:07 
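[editor's note] The grow under test happens mid-I/O at the 2-second mark above: bdev_lvol_grow_lvstore tells the lvstore to claim the blocks the AIO rescan exposed, and total_data_clusters moves from 49 to 99 while bdevperf keeps writing (throughput stays flat around 17.8K IOPS through the resize). The whole clean-path lifecycle, condensed from the trace (the backing-file path here is illustrative; the test uses a file in its repo tree):

    truncate -s 200M /tmp/aio_file                          # 200 MiB backing file
    rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096
    lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
            --md-pages-per-cluster-ratio 300 aio_bdev lvs)  # 4 MiB clusters
    rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # 49
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)      # 150 MiB volume
    truncate -s 400M /tmp/aio_file                          # grow the file...
    rpc.py bdev_aio_rescan aio_bdev                         # ...bdev: 51200 -> 102400 blocks
    rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # still 49
    rpc.py bdev_lvol_grow_lvstore -u "$lvs"                 # claim the new space
    rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # 99

The cluster counts line up with the file sizes: 200 MiB and 400 MiB hold 50 and 100 4 MiB clusters respectively, one of which goes to lvstore metadata.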
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:44:12.685 22:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 441153' 00:44:12.685 killing process with pid 441153 00:44:12.685 22:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 441153 00:44:12.685 Received shutdown signal, test time was about 10.000000 seconds 00:44:12.685 00:44:12.685 Latency(us) 00:44:12.685 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:12.685 =================================================================================================================== 00:44:12.685 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:44:12.685 22:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 441153 00:44:12.685 22:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:44:12.945 22:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:13.205 22:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6de702f0-e6c9-4b70-9406-adba0958e048 00:44:13.205 22:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:44:13.205 22:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:44:13.205 22:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:44:13.205 22:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:44:13.465 [2024-10-01 22:43:08.537334] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:44:13.465 22:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6de702f0-e6c9-4b70-9406-adba0958e048 00:44:13.465 22:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:44:13.465 22:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6de702f0-e6c9-4b70-9406-adba0958e048 00:44:13.465 22:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:44:13.465 22:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:44:13.465 22:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:44:13.465 22:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:44:13.465 22:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:44:13.465 22:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:44:13.465 22:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:44:13.465 22:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:44:13.465 22:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6de702f0-e6c9-4b70-9406-adba0958e048 00:44:13.725 request: 00:44:13.725 { 00:44:13.725 "uuid": "6de702f0-e6c9-4b70-9406-adba0958e048", 00:44:13.725 "method": "bdev_lvol_get_lvstores", 00:44:13.725 "req_id": 1 00:44:13.725 } 00:44:13.725 Got JSON-RPC error response 00:44:13.725 response: 00:44:13.725 { 00:44:13.725 "code": -19, 00:44:13.725 "message": "No such device" 00:44:13.725 } 00:44:13.725 22:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:44:13.725 22:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:44:13.725 22:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:44:13.725 22:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:44:13.725 22:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:44:13.725 aio_bdev 00:44:13.725 22:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c602595e-e9c5-4fdc-9cdd-cbec621ae2c8 00:44:13.725 22:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=c602595e-e9c5-4fdc-9cdd-cbec621ae2c8 00:44:13.725 22:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:44:13.725 22:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:44:13.725 22:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:44:13.725 22:43:08 
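[editor's note] The block above is common.sh's NOT helper at work: after bdev_aio_delete hot-removes the base bdev (which closes the lvstore, per the vbdev_lvs_hotremove_cb notice), the test asserts that bdev_lvol_get_lvstores now fails with -19 / "No such device" instead of succeeding. The remaining trace re-creates aio_bdev, waits for the lvol to be re-examined, and closes the accounting: 38 allocated clusters (the 150 MiB lvol) + 61 free = 99 total. A sketch of the assertion and teardown, condensed from the traced RPCs:

    rpc.py bdev_aio_delete aio_bdev                      # hot-remove; lvstore goes away
    if rpc.py bdev_lvol_get_lvstores -u "$lvs" 2>/dev/null; then
        echo "FAIL: lvstore still visible after hot remove" >&2; exit 1
    fi
    rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096   # reload; lvstore is re-examined
    rpc.py bdev_wait_for_examine
    rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'   # 61 (= 99 - 38)
    rpc.py bdev_lvol_delete "$lvol"
    rpc.py bdev_lvol_delete_lvstore -u "$lvs"
    rpc.py bdev_aio_delete aio_bdev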
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:44:13.725 22:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:44:13.985 22:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c602595e-e9c5-4fdc-9cdd-cbec621ae2c8 -t 2000 00:44:14.245 [ 00:44:14.245 { 00:44:14.245 "name": "c602595e-e9c5-4fdc-9cdd-cbec621ae2c8", 00:44:14.245 "aliases": [ 00:44:14.245 "lvs/lvol" 00:44:14.245 ], 00:44:14.245 "product_name": "Logical Volume", 00:44:14.245 "block_size": 4096, 00:44:14.245 "num_blocks": 38912, 00:44:14.245 "uuid": "c602595e-e9c5-4fdc-9cdd-cbec621ae2c8", 00:44:14.245 "assigned_rate_limits": { 00:44:14.245 "rw_ios_per_sec": 0, 00:44:14.245 "rw_mbytes_per_sec": 0, 00:44:14.245 "r_mbytes_per_sec": 0, 00:44:14.245 "w_mbytes_per_sec": 0 00:44:14.245 }, 00:44:14.245 "claimed": false, 00:44:14.245 "zoned": false, 00:44:14.245 "supported_io_types": { 00:44:14.245 "read": true, 00:44:14.245 "write": true, 00:44:14.245 "unmap": true, 00:44:14.245 "flush": false, 00:44:14.245 "reset": true, 00:44:14.245 "nvme_admin": false, 00:44:14.245 "nvme_io": false, 00:44:14.245 "nvme_io_md": false, 00:44:14.245 "write_zeroes": true, 00:44:14.245 "zcopy": false, 00:44:14.245 "get_zone_info": false, 00:44:14.245 "zone_management": false, 00:44:14.245 "zone_append": false, 00:44:14.245 "compare": false, 00:44:14.245 "compare_and_write": false, 00:44:14.245 "abort": false, 00:44:14.245 "seek_hole": true, 00:44:14.245 "seek_data": true, 00:44:14.245 "copy": false, 00:44:14.245 "nvme_iov_md": false 00:44:14.245 }, 00:44:14.245 "driver_specific": { 00:44:14.245 "lvol": { 00:44:14.245 "lvol_store_uuid": "6de702f0-e6c9-4b70-9406-adba0958e048", 00:44:14.245 "base_bdev": "aio_bdev", 00:44:14.245 "thin_provision": false, 00:44:14.245 "num_allocated_clusters": 38, 00:44:14.245 "snapshot": false, 00:44:14.245 "clone": false, 00:44:14.245 "esnap_clone": false 00:44:14.245 } 00:44:14.245 } 00:44:14.245 } 00:44:14.245 ] 00:44:14.245 22:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:44:14.245 22:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6de702f0-e6c9-4b70-9406-adba0958e048 00:44:14.245 22:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:44:14.245 22:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:44:14.245 22:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6de702f0-e6c9-4b70-9406-adba0958e048 00:44:14.245 22:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:44:14.506 22:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 
)) 00:44:14.506 22:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c602595e-e9c5-4fdc-9cdd-cbec621ae2c8 00:44:14.766 22:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6de702f0-e6c9-4b70-9406-adba0958e048 00:44:14.766 22:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:44:15.026 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:44:15.026 00:44:15.026 real 0m15.677s 00:44:15.026 user 0m15.338s 00:44:15.026 sys 0m1.358s 00:44:15.026 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:15.026 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:44:15.026 ************************************ 00:44:15.026 END TEST lvs_grow_clean 00:44:15.026 ************************************ 00:44:15.026 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:44:15.026 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:44:15.026 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:15.026 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:44:15.026 ************************************ 00:44:15.026 START TEST lvs_grow_dirty 00:44:15.026 ************************************ 00:44:15.026 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:44:15.026 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:44:15.026 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:44:15.026 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:44:15.026 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:44:15.026 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:44:15.026 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:44:15.026 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:44:15.026 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:44:15.026 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:44:15.286 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:44:15.287 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:44:15.547 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=65d51159-9f12-4c7f-bc08-9c1cf6def9fd 00:44:15.547 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65d51159-9f12-4c7f-bc08-9c1cf6def9fd 00:44:15.547 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:44:15.547 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:44:15.547 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:44:15.547 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 65d51159-9f12-4c7f-bc08-9c1cf6def9fd lvol 150 00:44:15.808 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=2273f3ab-27bf-4e8d-953c-b890491263f5 00:44:15.808 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:44:15.808 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:44:16.068 [2024-10-01 22:43:11.121268] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:44:16.068 [2024-10-01 22:43:11.121419] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:44:16.068 true 00:44:16.068 22:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65d51159-9f12-4c7f-bc08-9c1cf6def9fd 00:44:16.068 22:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:44:16.068 22:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:44:16.068 22:43:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:44:16.328 22:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2273f3ab-27bf-4e8d-953c-b890491263f5 00:44:16.588 22:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:16.588 [2024-10-01 22:43:11.785704] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:16.588 22:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:44:16.874 22:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=444101 00:44:16.874 22:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:44:16.874 22:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:44:16.874 22:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 444101 /var/tmp/bdevperf.sock 00:44:16.874 22:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 444101 ']' 00:44:16.874 22:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:44:16.874 22:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:44:16.874 22:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:44:16.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:44:16.874 22:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:44:16.874 22:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:44:16.874 [2024-10-01 22:43:12.020133] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
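The trace above is the dirty-variant setup in one pass: build a 200M file-backed AIO bdev, put an lvstore with 4 MiB clusters on it (--md-pages-per-cluster-ratio 300 reserves extra metadata pages so the store can grow later), carve out a 150M lvol, grow the backing file to 400M and rescan, then export the lvol over NVMe/TCP for bdevperf. A minimal sketch of that RPC sequence, assuming rpc.py on the default /var/tmp/spdk.sock and an illustrative /tmp path for the backing file:

    # 200M file-backed AIO bdev with a 4K block size
    truncate -s 200M /tmp/aio_file            # path is illustrative
    scripts/rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096
    # lvstore with 4 MiB clusters and spare md pages for later growth
    lvs=$(scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    lvol=$(scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 150)
    # grow the file, then have the AIO bdev pick up the new size
    truncate -s 400M /tmp/aio_file
    scripts/rpc.py bdev_aio_rescan aio_bdev
    # export the lvol over NVMe/TCP
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
          -t tcp -a 10.0.0.2 -s 4420

Note that after the rescan the lvstore still reports 49 data clusters: the dirty variant deliberately defers bdev_lvol_grow_lvstore until bdevperf has I/O in flight.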
00:44:16.874 [2024-10-01 22:43:12.020192] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid444101 ] 00:44:16.874 [2024-10-01 22:43:12.099648] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:17.161 [2024-10-01 22:43:12.154048] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:44:17.732 22:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:44:17.732 22:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:44:17.732 22:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:44:17.992 Nvme0n1 00:44:17.992 22:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:44:17.992 [ 00:44:17.992 { 00:44:17.992 "name": "Nvme0n1", 00:44:17.992 "aliases": [ 00:44:17.992 "2273f3ab-27bf-4e8d-953c-b890491263f5" 00:44:17.992 ], 00:44:17.992 "product_name": "NVMe disk", 00:44:17.992 "block_size": 4096, 00:44:17.992 "num_blocks": 38912, 00:44:17.992 "uuid": "2273f3ab-27bf-4e8d-953c-b890491263f5", 00:44:17.992 "numa_id": 0, 00:44:17.992 "assigned_rate_limits": { 00:44:17.992 "rw_ios_per_sec": 0, 00:44:17.992 "rw_mbytes_per_sec": 0, 00:44:17.992 "r_mbytes_per_sec": 0, 00:44:17.992 "w_mbytes_per_sec": 0 00:44:17.992 }, 00:44:17.992 "claimed": false, 00:44:17.992 "zoned": false, 00:44:17.992 "supported_io_types": { 00:44:17.992 "read": true, 00:44:17.992 "write": true, 00:44:17.992 "unmap": true, 00:44:17.992 "flush": true, 00:44:17.992 "reset": true, 00:44:17.992 "nvme_admin": true, 00:44:17.992 "nvme_io": true, 00:44:17.992 "nvme_io_md": false, 00:44:17.992 "write_zeroes": true, 00:44:17.992 "zcopy": false, 00:44:17.992 "get_zone_info": false, 00:44:17.992 "zone_management": false, 00:44:17.992 "zone_append": false, 00:44:17.992 "compare": true, 00:44:17.992 "compare_and_write": true, 00:44:17.992 "abort": true, 00:44:17.992 "seek_hole": false, 00:44:17.992 "seek_data": false, 00:44:17.992 "copy": true, 00:44:17.992 "nvme_iov_md": false 00:44:17.992 }, 00:44:17.992 "memory_domains": [ 00:44:17.992 { 00:44:17.992 "dma_device_id": "system", 00:44:17.992 "dma_device_type": 1 00:44:17.992 } 00:44:17.993 ], 00:44:17.993 "driver_specific": { 00:44:17.993 "nvme": [ 00:44:17.993 { 00:44:17.993 "trid": { 00:44:17.993 "trtype": "TCP", 00:44:17.993 "adrfam": "IPv4", 00:44:17.993 "traddr": "10.0.0.2", 00:44:17.993 "trsvcid": "4420", 00:44:17.993 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:44:17.993 }, 00:44:17.993 "ctrlr_data": { 00:44:17.993 "cntlid": 1, 00:44:17.993 "vendor_id": "0x8086", 00:44:17.993 "model_number": "SPDK bdev Controller", 00:44:17.993 "serial_number": "SPDK0", 00:44:17.993 "firmware_revision": "25.01", 00:44:17.993 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:17.993 "oacs": { 00:44:17.993 "security": 0, 00:44:17.993 "format": 0, 00:44:17.993 "firmware": 0, 00:44:17.993 "ns_manage": 0 00:44:17.993 }, 
00:44:17.993 "multi_ctrlr": true, 00:44:17.993 "ana_reporting": false 00:44:17.993 }, 00:44:17.993 "vs": { 00:44:17.993 "nvme_version": "1.3" 00:44:17.993 }, 00:44:17.993 "ns_data": { 00:44:17.993 "id": 1, 00:44:17.993 "can_share": true 00:44:17.993 } 00:44:17.993 } 00:44:17.993 ], 00:44:17.993 "mp_policy": "active_passive" 00:44:17.993 } 00:44:17.993 } 00:44:17.993 ] 00:44:17.993 22:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=444444 00:44:17.993 22:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:44:17.993 22:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:44:18.253 Running I/O for 10 seconds... 00:44:19.195 Latency(us) 00:44:19.195 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:19.195 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:19.195 Nvme0n1 : 1.00 17549.00 68.55 0.00 0.00 0.00 0.00 0.00 00:44:19.195 =================================================================================================================== 00:44:19.195 Total : 17549.00 68.55 0.00 0.00 0.00 0.00 0.00 00:44:19.195 00:44:20.137 22:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 65d51159-9f12-4c7f-bc08-9c1cf6def9fd 00:44:20.137 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:20.137 Nvme0n1 : 2.00 17662.00 68.99 0.00 0.00 0.00 0.00 0.00 00:44:20.137 =================================================================================================================== 00:44:20.137 Total : 17662.00 68.99 0.00 0.00 0.00 0.00 0.00 00:44:20.137 00:44:20.399 true 00:44:20.399 22:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65d51159-9f12-4c7f-bc08-9c1cf6def9fd 00:44:20.399 22:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:44:20.399 22:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:44:20.399 22:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:44:20.399 22:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 444444 00:44:21.340 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:21.340 Nvme0n1 : 3.00 17684.33 69.08 0.00 0.00 0.00 0.00 0.00 00:44:21.340 =================================================================================================================== 00:44:21.340 Total : 17684.33 69.08 0.00 0.00 0.00 0.00 0.00 00:44:21.340 00:44:22.283 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:22.283 Nvme0n1 : 4.00 17727.00 69.25 0.00 0.00 0.00 0.00 0.00 00:44:22.283 =================================================================================================================== 
00:44:22.283 Total : 17727.00 69.25 0.00 0.00 0.00 0.00 0.00 00:44:22.283 00:44:23.225 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:23.225 Nvme0n1 : 5.00 17752.80 69.35 0.00 0.00 0.00 0.00 0.00 00:44:23.225 =================================================================================================================== 00:44:23.225 Total : 17752.80 69.35 0.00 0.00 0.00 0.00 0.00 00:44:23.225 00:44:24.165 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:24.165 Nvme0n1 : 6.00 17770.00 69.41 0.00 0.00 0.00 0.00 0.00 00:44:24.165 =================================================================================================================== 00:44:24.165 Total : 17770.00 69.41 0.00 0.00 0.00 0.00 0.00 00:44:24.165 00:44:25.107 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:25.107 Nvme0n1 : 7.00 17782.43 69.46 0.00 0.00 0.00 0.00 0.00 00:44:25.107 =================================================================================================================== 00:44:25.107 Total : 17782.43 69.46 0.00 0.00 0.00 0.00 0.00 00:44:25.107 00:44:26.491 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:26.491 Nvme0n1 : 8.00 17791.50 69.50 0.00 0.00 0.00 0.00 0.00 00:44:26.491 =================================================================================================================== 00:44:26.491 Total : 17791.50 69.50 0.00 0.00 0.00 0.00 0.00 00:44:26.491 00:44:27.433 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:27.433 Nvme0n1 : 9.00 17805.78 69.55 0.00 0.00 0.00 0.00 0.00 00:44:27.433 =================================================================================================================== 00:44:27.433 Total : 17805.78 69.55 0.00 0.00 0.00 0.00 0.00 00:44:27.433 00:44:28.376 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:28.376 Nvme0n1 : 10.00 17817.20 69.60 0.00 0.00 0.00 0.00 0.00 00:44:28.376 =================================================================================================================== 00:44:28.376 Total : 17817.20 69.60 0.00 0.00 0.00 0.00 0.00 00:44:28.376 00:44:28.376 00:44:28.376 Latency(us) 00:44:28.376 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:28.376 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:28.376 Nvme0n1 : 10.01 17816.85 69.60 0.00 0.00 7179.60 1747.63 13489.49 00:44:28.376 =================================================================================================================== 00:44:28.376 Total : 17816.85 69.60 0.00 0.00 7179.60 1747.63 13489.49 00:44:28.376 { 00:44:28.376 "results": [ 00:44:28.376 { 00:44:28.376 "job": "Nvme0n1", 00:44:28.376 "core_mask": "0x2", 00:44:28.376 "workload": "randwrite", 00:44:28.376 "status": "finished", 00:44:28.376 "queue_depth": 128, 00:44:28.376 "io_size": 4096, 00:44:28.376 "runtime": 10.007381, 00:44:28.376 "iops": 17816.84938347006, 00:44:28.376 "mibps": 69.59706790417992, 00:44:28.376 "io_failed": 0, 00:44:28.376 "io_timeout": 0, 00:44:28.376 "avg_latency_us": 7179.601083417462, 00:44:28.376 "min_latency_us": 1747.6266666666668, 00:44:28.376 "max_latency_us": 13489.493333333334 00:44:28.376 } 00:44:28.376 ], 00:44:28.376 "core_count": 1 00:44:28.376 } 00:44:28.376 22:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 444101 00:44:28.376 
22:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 444101 ']' 00:44:28.376 22:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 444101 00:44:28.377 22:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:44:28.377 22:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:44:28.377 22:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 444101 00:44:28.377 22:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:44:28.377 22:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:44:28.377 22:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 444101' 00:44:28.377 killing process with pid 444101 00:44:28.377 22:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 444101 00:44:28.377 Received shutdown signal, test time was about 10.000000 seconds 00:44:28.377 00:44:28.377 Latency(us) 00:44:28.377 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:28.377 =================================================================================================================== 00:44:28.377 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:44:28.377 22:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 444101 00:44:28.377 22:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:44:28.637 22:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:28.898 22:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65d51159-9f12-4c7f-bc08-9c1cf6def9fd 00:44:28.898 22:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:44:28.898 22:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:44:28.898 22:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:44:28.898 22:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 440636 00:44:28.898 22:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 440636 00:44:29.160 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 440636 Killed "${NVMF_APP[@]}" "$@" 
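What makes this run the dirty flavor is visible just above: while bdevperf writes to Nvme0n1, the lvstore is grown (total_data_clusters goes from 49 to 99), and once the run completes the nvmf target that owns the lvstore (pid 440636) is killed with SIGKILL rather than shut down cleanly, so the blobstore is never unloaded. A rough sketch of that step, with $lvs and $nvmfpid standing in for the UUID and pid in the log:

    # grow the lvstore into the resized AIO bdev while I/O is still running
    scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs"
    scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" \
        | jq -r '.[0].total_data_clusters'   # expect 99
    # leave the blobstore dirty: no clean shutdown, so the next load must recover
    kill -9 "$nvmfpid"

The recovery shows up a little further down as the bs_recover / "Recover: blob 0x0" notices when a freshly started target re-opens the same AIO file.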
00:44:29.160 22:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:44:29.160 22:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:44:29.160 22:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:44:29.160 22:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:29.160 22:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:44:29.160 22:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=446457 00:44:29.160 22:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 446457 00:44:29.160 22:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:44:29.160 22:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 446457 ']' 00:44:29.160 22:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:29.160 22:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:44:29.160 22:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:29.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:29.160 22:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:44:29.160 22:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:44:29.160 [2024-10-01 22:43:24.226098] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:44:29.160 [2024-10-01 22:43:24.227082] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:44:29.160 [2024-10-01 22:43:24.227124] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:29.160 [2024-10-01 22:43:24.295013] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:29.160 [2024-10-01 22:43:24.360795] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:29.160 [2024-10-01 22:43:24.360830] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:29.160 [2024-10-01 22:43:24.360838] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:29.160 [2024-10-01 22:43:24.360844] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:44:29.160 [2024-10-01 22:43:24.360850] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:29.160 [2024-10-01 22:43:24.360869] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:44:29.421 [2024-10-01 22:43:24.467004] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:44:29.421 [2024-10-01 22:43:24.467282] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:44:29.992 22:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:44:29.992 22:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:44:29.992 22:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:44:29.992 22:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:29.992 22:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:44:29.992 22:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:29.992 22:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:44:29.992 [2024-10-01 22:43:25.224389] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:44:29.992 [2024-10-01 22:43:25.224494] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:44:29.992 [2024-10-01 22:43:25.224527] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:44:29.992 22:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:44:29.992 22:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 2273f3ab-27bf-4e8d-953c-b890491263f5 00:44:29.992 22:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=2273f3ab-27bf-4e8d-953c-b890491263f5 00:44:29.992 22:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:44:29.992 22:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:44:29.992 22:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:44:29.992 22:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:44:29.993 22:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:44:30.253 22:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2273f3ab-27bf-4e8d-953c-b890491263f5 -t 2000 00:44:30.514 [ 00:44:30.514 { 00:44:30.514 "name": "2273f3ab-27bf-4e8d-953c-b890491263f5", 00:44:30.514 "aliases": [ 00:44:30.514 "lvs/lvol" 00:44:30.514 ], 00:44:30.514 "product_name": "Logical Volume", 00:44:30.514 "block_size": 4096, 00:44:30.514 "num_blocks": 38912, 00:44:30.514 "uuid": "2273f3ab-27bf-4e8d-953c-b890491263f5", 00:44:30.514 "assigned_rate_limits": { 00:44:30.514 "rw_ios_per_sec": 0, 00:44:30.514 "rw_mbytes_per_sec": 0, 00:44:30.514 "r_mbytes_per_sec": 0, 00:44:30.514 "w_mbytes_per_sec": 0 00:44:30.514 }, 00:44:30.514 "claimed": false, 00:44:30.514 "zoned": false, 00:44:30.514 "supported_io_types": { 00:44:30.514 "read": true, 00:44:30.514 "write": true, 00:44:30.514 "unmap": true, 00:44:30.514 "flush": false, 00:44:30.514 "reset": true, 00:44:30.514 "nvme_admin": false, 00:44:30.514 "nvme_io": false, 00:44:30.514 "nvme_io_md": false, 00:44:30.514 "write_zeroes": true, 00:44:30.514 "zcopy": false, 00:44:30.514 "get_zone_info": false, 00:44:30.514 "zone_management": false, 00:44:30.514 "zone_append": false, 00:44:30.514 "compare": false, 00:44:30.514 "compare_and_write": false, 00:44:30.514 "abort": false, 00:44:30.514 "seek_hole": true, 00:44:30.514 "seek_data": true, 00:44:30.514 "copy": false, 00:44:30.514 "nvme_iov_md": false 00:44:30.514 }, 00:44:30.514 "driver_specific": { 00:44:30.514 "lvol": { 00:44:30.514 "lvol_store_uuid": "65d51159-9f12-4c7f-bc08-9c1cf6def9fd", 00:44:30.514 "base_bdev": "aio_bdev", 00:44:30.514 "thin_provision": false, 00:44:30.514 "num_allocated_clusters": 38, 00:44:30.514 "snapshot": false, 00:44:30.514 "clone": false, 00:44:30.514 "esnap_clone": false 00:44:30.514 } 00:44:30.514 } 00:44:30.514 } 00:44:30.514 ] 00:44:30.514 22:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:44:30.514 22:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65d51159-9f12-4c7f-bc08-9c1cf6def9fd 00:44:30.514 22:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:44:30.514 22:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:44:30.514 22:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65d51159-9f12-4c7f-bc08-9c1cf6def9fd 00:44:30.514 22:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:44:30.775 22:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:44:30.775 22:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:44:31.036 [2024-10-01 22:43:26.085378] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:44:31.036 22:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65d51159-9f12-4c7f-bc08-9c1cf6def9fd 00:44:31.036 22:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:44:31.036 22:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65d51159-9f12-4c7f-bc08-9c1cf6def9fd 00:44:31.036 22:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:44:31.036 22:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:44:31.036 22:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:44:31.036 22:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:44:31.036 22:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:44:31.036 22:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:44:31.036 22:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:44:31.036 22:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:44:31.036 22:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65d51159-9f12-4c7f-bc08-9c1cf6def9fd 00:44:31.036 request: 00:44:31.036 { 00:44:31.036 "uuid": "65d51159-9f12-4c7f-bc08-9c1cf6def9fd", 00:44:31.036 "method": "bdev_lvol_get_lvstores", 00:44:31.036 "req_id": 1 00:44:31.036 } 00:44:31.036 Got JSON-RPC error response 00:44:31.036 response: 00:44:31.036 { 00:44:31.036 "code": -19, 00:44:31.036 "message": "No such device" 00:44:31.036 } 00:44:31.296 22:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:44:31.297 22:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:44:31.297 22:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:44:31.297 22:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:44:31.297 22:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:44:31.297 
aio_bdev 00:44:31.297 22:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 2273f3ab-27bf-4e8d-953c-b890491263f5 00:44:31.297 22:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=2273f3ab-27bf-4e8d-953c-b890491263f5 00:44:31.297 22:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:44:31.297 22:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:44:31.297 22:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:44:31.297 22:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:44:31.297 22:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:44:31.558 22:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2273f3ab-27bf-4e8d-953c-b890491263f5 -t 2000 00:44:31.558 [ 00:44:31.558 { 00:44:31.558 "name": "2273f3ab-27bf-4e8d-953c-b890491263f5", 00:44:31.558 "aliases": [ 00:44:31.558 "lvs/lvol" 00:44:31.558 ], 00:44:31.558 "product_name": "Logical Volume", 00:44:31.558 "block_size": 4096, 00:44:31.558 "num_blocks": 38912, 00:44:31.558 "uuid": "2273f3ab-27bf-4e8d-953c-b890491263f5", 00:44:31.558 "assigned_rate_limits": { 00:44:31.558 "rw_ios_per_sec": 0, 00:44:31.558 "rw_mbytes_per_sec": 0, 00:44:31.558 "r_mbytes_per_sec": 0, 00:44:31.558 "w_mbytes_per_sec": 0 00:44:31.558 }, 00:44:31.558 "claimed": false, 00:44:31.558 "zoned": false, 00:44:31.558 "supported_io_types": { 00:44:31.558 "read": true, 00:44:31.558 "write": true, 00:44:31.558 "unmap": true, 00:44:31.558 "flush": false, 00:44:31.558 "reset": true, 00:44:31.558 "nvme_admin": false, 00:44:31.558 "nvme_io": false, 00:44:31.558 "nvme_io_md": false, 00:44:31.558 "write_zeroes": true, 00:44:31.558 "zcopy": false, 00:44:31.558 "get_zone_info": false, 00:44:31.558 "zone_management": false, 00:44:31.558 "zone_append": false, 00:44:31.558 "compare": false, 00:44:31.558 "compare_and_write": false, 00:44:31.558 "abort": false, 00:44:31.558 "seek_hole": true, 00:44:31.558 "seek_data": true, 00:44:31.558 "copy": false, 00:44:31.558 "nvme_iov_md": false 00:44:31.558 }, 00:44:31.558 "driver_specific": { 00:44:31.558 "lvol": { 00:44:31.558 "lvol_store_uuid": "65d51159-9f12-4c7f-bc08-9c1cf6def9fd", 00:44:31.558 "base_bdev": "aio_bdev", 00:44:31.558 "thin_provision": false, 00:44:31.558 "num_allocated_clusters": 38, 00:44:31.558 "snapshot": false, 00:44:31.558 "clone": false, 00:44:31.558 "esnap_clone": false 00:44:31.558 } 00:44:31.558 } 00:44:31.558 } 00:44:31.558 ] 00:44:31.558 22:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:44:31.558 22:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65d51159-9f12-4c7f-bc08-9c1cf6def9fd 00:44:31.558 22:43:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:44:31.819 22:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:44:31.819 22:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65d51159-9f12-4c7f-bc08-9c1cf6def9fd 00:44:31.819 22:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:44:32.079 22:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:44:32.079 22:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2273f3ab-27bf-4e8d-953c-b890491263f5 00:44:32.079 22:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 65d51159-9f12-4c7f-bc08-9c1cf6def9fd 00:44:32.341 22:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:44:32.602 22:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:44:32.602 00:44:32.602 real 0m17.456s 00:44:32.602 user 0m35.334s 00:44:32.602 sys 0m3.042s 00:44:32.602 22:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:32.602 22:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:44:32.602 ************************************ 00:44:32.602 END TEST lvs_grow_dirty 00:44:32.602 ************************************ 00:44:32.602 22:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:44:32.602 22:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:44:32.602 22:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:44:32.602 22:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:44:32.602 22:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:44:32.602 22:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:44:32.602 22:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:44:32.602 22:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:44:32.602 22:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:44:32.602 nvmf_trace.0 00:44:32.602 22:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:44:32.602 22:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:44:32.602 22:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:44:32.602 22:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:44:32.602 22:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:32.602 22:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:44:32.602 22:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:32.602 22:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:32.602 rmmod nvme_tcp 00:44:32.602 rmmod nvme_fabrics 00:44:32.602 rmmod nvme_keyring 00:44:32.602 22:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:32.602 22:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:44:32.602 22:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:44:32.602 22:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 446457 ']' 00:44:32.602 22:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 446457 00:44:32.602 22:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 446457 ']' 00:44:32.602 22:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 446457 00:44:32.602 22:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:44:32.863 22:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:44:32.863 22:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 446457 00:44:32.863 22:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:44:32.863 22:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:44:32.863 22:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 446457' 00:44:32.863 killing process with pid 446457 00:44:32.863 22:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 446457 00:44:32.863 22:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 446457 00:44:32.863 22:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:44:32.863 22:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:44:32.863 22:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:44:32.863 
22:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:44:32.863 22:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:44:32.863 22:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:44:32.864 22:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:44:33.124 22:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:33.124 22:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:33.124 22:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:33.124 22:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:33.124 22:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:35.036 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:35.036 00:44:35.036 real 0m44.552s 00:44:35.036 user 0m53.640s 00:44:35.036 sys 0m10.596s 00:44:35.036 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:35.036 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:44:35.036 ************************************ 00:44:35.036 END TEST nvmf_lvs_grow 00:44:35.036 ************************************ 00:44:35.036 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:44:35.036 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:44:35.036 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:35.036 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:44:35.036 ************************************ 00:44:35.036 START TEST nvmf_bdev_io_wait 00:44:35.036 ************************************ 00:44:35.036 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:44:35.298 * Looking for test storage... 
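Before the next suite starts, the teardown above archives the SPDK trace buffer and dismantles the TCP transport. A condensed sketch of that cleanup; the tar target and module names come from the log, while the explicit netns delete is an assumption standing in for the _remove_spdk_ns helper:

    # archive the trace shared-memory file for offline debugging
    tar -C /dev/shm/ -czf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0
    # stop the target, then unload the kernel initiator modules
    kill "$nvmfpid"; wait "$nvmfpid"
    modprobe -v -r nvme-tcp nvme-fabrics
    # restore firewall rules, dropping the test's SPDK_NVMF entries
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # assumption: remove the test namespace and flush the leftover interface
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
    ip -4 addr flush cvl_0_1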
00:44:35.298 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:35.298 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:44:35.298 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:44:35.298 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:44:35.298 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:44:35.298 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:35.298 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:35.298 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:35.298 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:44:35.298 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:44:35.298 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:44:35.298 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:44:35.298 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:44:35.298 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:44:35.298 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:44:35.298 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:35.298 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:44:35.298 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:44:35.298 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:35.298 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:35.298 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:44:35.298 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:44:35.298 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:35.298 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:44:35.298 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:44:35.298 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:44:35.298 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:44:35.298 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:35.298 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:44:35.298 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:44:35.298 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:35.298 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:35.298 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:44:35.298 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:35.298 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:44:35.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:35.298 --rc genhtml_branch_coverage=1 00:44:35.298 --rc genhtml_function_coverage=1 00:44:35.298 --rc genhtml_legend=1 00:44:35.298 --rc geninfo_all_blocks=1 00:44:35.298 --rc geninfo_unexecuted_blocks=1 00:44:35.298 00:44:35.298 ' 00:44:35.298 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:44:35.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:35.298 --rc genhtml_branch_coverage=1 00:44:35.298 --rc genhtml_function_coverage=1 00:44:35.298 --rc genhtml_legend=1 00:44:35.298 --rc geninfo_all_blocks=1 00:44:35.298 --rc geninfo_unexecuted_blocks=1 00:44:35.298 00:44:35.298 ' 00:44:35.298 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:44:35.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:35.298 --rc genhtml_branch_coverage=1 00:44:35.298 --rc genhtml_function_coverage=1 00:44:35.298 --rc genhtml_legend=1 00:44:35.298 --rc geninfo_all_blocks=1 00:44:35.298 --rc geninfo_unexecuted_blocks=1 00:44:35.298 00:44:35.298 ' 00:44:35.298 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:44:35.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:35.298 --rc genhtml_branch_coverage=1 00:44:35.298 --rc genhtml_function_coverage=1 00:44:35.298 --rc genhtml_legend=1 00:44:35.298 --rc geninfo_all_blocks=1 00:44:35.298 --rc 
geninfo_unexecuted_blocks=1 00:44:35.298 00:44:35.298 ' 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:44:35.299 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:44:43.439 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:43.439 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:44:43.439 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:43.439 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:43.439 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:43.439 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:43.439 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:44:43.439 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:44:43.439 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:43.439 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:44:43.439 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:44:43.439 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:44:43.439 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:44:43.439 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:44:43.439 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:44:43.439 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:43.439 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:43.439 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:43.439 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:43.439 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:43.439 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:43.439 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
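[annotation] The e810/x722/mlx arrays above are indexed out of a pci_bus_cache keyed by vendor:device id. The cache is populated outside this excerpt; a plausible sysfs-based sketch of the mechanism (the construction below is an assumption, not the traced code):
    declare -A pci_bus_cache
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor") device=$(<"$pci/device")      # e.g. 0x8086 / 0x159b
        pci_bus_cache["$vendor:$device"]+="${pci##*/} "        # PCI addresses per id pair
    done
    e810=(${pci_bus_cache["0x8086:0x1592"]} ${pci_bus_cache["0x8086:0x159b"]})  # Intel E810 NICs
    x722=(${pci_bus_cache["0x8086:0x37d2"]})                                    # Intel X722 NICs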
00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:44:43.440 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:44:43.440 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:44:43.440 Found net devices under 0000:4b:00.0: cvl_0_0 00:44:43.440 
22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:44:43.440 Found net devices under 0000:4b:00.1: cvl_0_1 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:43.440 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:43.440 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:44:43.440 00:44:43.440 --- 10.0.0.2 ping statistics --- 00:44:43.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:43.440 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:43.440 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:44:43.440 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:44:43.440 00:44:43.440 --- 10.0.0.1 ping statistics --- 00:44:43.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:43.440 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:44:43.440 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:44:43.441 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:44:43.441 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:43.441 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:44:43.441 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=451366 00:44:43.441 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 451366 00:44:43.441 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 451366 ']' 00:44:43.441 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:43.441 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:44:43.441 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:43.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
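[annotation] Condensed replay of the dual-namespace topology assembled above: the target NIC cvl_0_0 (10.0.0.2) is isolated in netns cvl_0_0_ns_spdk while the initiator NIC cvl_0_1 (10.0.0.1) stays in the root namespace. All commands are as traced:
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move target port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                  # root ns -> target ns sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and the reverse direction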
00:44:43.441 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:44:43.441 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:44:43.441 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:44:43.441 [2024-10-01 22:43:37.886241] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:44:43.441 [2024-10-01 22:43:37.887376] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:44:43.441 [2024-10-01 22:43:37.887426] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:43.441 [2024-10-01 22:43:37.960289] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:44:43.441 [2024-10-01 22:43:38.035739] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:43.441 [2024-10-01 22:43:38.035779] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:43.441 [2024-10-01 22:43:38.035787] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:43.441 [2024-10-01 22:43:38.035794] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:43.441 [2024-10-01 22:43:38.035800] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:43.441 [2024-10-01 22:43:38.035886] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:44:43.441 [2024-10-01 22:43:38.036019] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:44:43.441 [2024-10-01 22:43:38.036174] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:44:43.441 [2024-10-01 22:43:38.036175] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:44:43.441 [2024-10-01 22:43:38.036504] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
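[annotation] The target is launched inside the namespace with --wait-for-rpc, and the harness then blocks until the RPC socket answers. A simplified sketch of that wait (the traced waitforlisten helper does more, e.g. pid liveness checks; the polling loop below is an assumption):
    ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do              # mirrors max_retries=100 above
        [[ -S /var/tmp/spdk.sock ]] && break     # UNIX socket appears once the app listens
        sleep 0.1
    done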
00:44:43.441 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:44:43.441 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:44:43.441 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:44:43.441 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:43.441 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:44:43.702 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:43.702 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:44:43.702 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:43.702 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:44:43.702 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:43.702 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:44:43.702 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:43.702 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:44:43.702 [2024-10-01 22:43:38.783386] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:44:43.702 [2024-10-01 22:43:38.783665] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:44:43.702 [2024-10-01 22:43:38.784107] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:44:43.702 [2024-10-01 22:43:38.784262] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
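[annotation] With the app paused in --wait-for-rpc state, bdev_io_wait.sh drives bring-up over JSON-RPC: the bdev options and framework init traced just above, then the transport/subsystem steps traced just below. The equivalent standalone sequence (rpc.py location assumed; rpc_cmd in the trace wraps the same calls):
    rpc="$rootdir/scripts/rpc.py"                     # assumed wrapper behind rpc_cmd
    $rpc bdev_set_options -p 5 -c 1                   # tiny bdev_io pool to exercise the IO-wait path
    $rpc framework_start_init                         # finish deferred initialization
    $rpc nvmf_create_transport -t tcp -o -u 8192      # TCP transport, 8 KiB IO unit size
    $rpc bdev_malloc_create 64 512 -b Malloc0         # 64 MiB bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420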
00:44:43.702 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:43.702 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:44:43.702 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:43.702 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:44:43.702 [2024-10-01 22:43:38.792696] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:43.702 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:43.702 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:44:43.702 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:43.702 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:44:43.702 Malloc0 00:44:43.702 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:43.702 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:44:43.702 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:43.702 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:44:43.703 [2024-10-01 22:43:38.852857] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=451540 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=451542 00:44:43.703 22:43:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:44:43.703 { 00:44:43.703 "params": { 00:44:43.703 "name": "Nvme$subsystem", 00:44:43.703 "trtype": "$TEST_TRANSPORT", 00:44:43.703 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:43.703 "adrfam": "ipv4", 00:44:43.703 "trsvcid": "$NVMF_PORT", 00:44:43.703 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:43.703 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:43.703 "hdgst": ${hdgst:-false}, 00:44:43.703 "ddgst": ${ddgst:-false} 00:44:43.703 }, 00:44:43.703 "method": "bdev_nvme_attach_controller" 00:44:43.703 } 00:44:43.703 EOF 00:44:43.703 )") 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=451544 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=451546 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:44:43.703 { 00:44:43.703 "params": { 00:44:43.703 "name": "Nvme$subsystem", 00:44:43.703 "trtype": "$TEST_TRANSPORT", 00:44:43.703 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:43.703 "adrfam": "ipv4", 00:44:43.703 "trsvcid": "$NVMF_PORT", 00:44:43.703 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:43.703 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:43.703 "hdgst": ${hdgst:-false}, 00:44:43.703 "ddgst": ${ddgst:-false} 00:44:43.703 }, 00:44:43.703 "method": "bdev_nvme_attach_controller" 00:44:43.703 } 00:44:43.703 EOF 00:44:43.703 )") 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 
00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:44:43.703 { 00:44:43.703 "params": { 00:44:43.703 "name": "Nvme$subsystem", 00:44:43.703 "trtype": "$TEST_TRANSPORT", 00:44:43.703 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:43.703 "adrfam": "ipv4", 00:44:43.703 "trsvcid": "$NVMF_PORT", 00:44:43.703 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:43.703 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:43.703 "hdgst": ${hdgst:-false}, 00:44:43.703 "ddgst": ${ddgst:-false} 00:44:43.703 }, 00:44:43.703 "method": "bdev_nvme_attach_controller" 00:44:43.703 } 00:44:43.703 EOF 00:44:43.703 )") 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:44:43.703 { 00:44:43.703 "params": { 00:44:43.703 "name": "Nvme$subsystem", 00:44:43.703 "trtype": "$TEST_TRANSPORT", 00:44:43.703 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:43.703 "adrfam": "ipv4", 00:44:43.703 "trsvcid": "$NVMF_PORT", 00:44:43.703 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:43.703 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:43.703 "hdgst": ${hdgst:-false}, 00:44:43.703 "ddgst": ${ddgst:-false} 00:44:43.703 }, 00:44:43.703 "method": "bdev_nvme_attach_controller" 00:44:43.703 } 00:44:43.703 EOF 00:44:43.703 )") 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 451540 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:44:43.703 "params": { 00:44:43.703 "name": "Nvme1", 00:44:43.703 "trtype": "tcp", 00:44:43.703 "traddr": "10.0.0.2", 00:44:43.703 "adrfam": "ipv4", 00:44:43.703 "trsvcid": "4420", 00:44:43.703 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:43.703 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:43.703 "hdgst": false, 00:44:43.703 "ddgst": false 00:44:43.703 }, 00:44:43.703 "method": "bdev_nvme_attach_controller" 00:44:43.703 }' 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:44:43.703 "params": { 00:44:43.703 "name": "Nvme1", 00:44:43.703 "trtype": "tcp", 00:44:43.703 "traddr": "10.0.0.2", 00:44:43.703 "adrfam": "ipv4", 00:44:43.703 "trsvcid": "4420", 00:44:43.703 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:43.703 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:43.703 "hdgst": false, 00:44:43.703 "ddgst": false 00:44:43.703 }, 00:44:43.703 "method": "bdev_nvme_attach_controller" 00:44:43.703 }' 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:44:43.703 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:44:43.703 "params": { 00:44:43.703 "name": "Nvme1", 00:44:43.703 "trtype": "tcp", 00:44:43.703 "traddr": "10.0.0.2", 00:44:43.703 "adrfam": "ipv4", 00:44:43.703 "trsvcid": "4420", 00:44:43.703 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:43.703 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:43.703 "hdgst": false, 00:44:43.703 "ddgst": false 00:44:43.703 }, 00:44:43.703 "method": "bdev_nvme_attach_controller" 00:44:43.704 }' 00:44:43.704 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:44:43.704 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:44:43.704 "params": { 00:44:43.704 "name": "Nvme1", 00:44:43.704 "trtype": "tcp", 00:44:43.704 "traddr": "10.0.0.2", 00:44:43.704 "adrfam": "ipv4", 00:44:43.704 "trsvcid": "4420", 00:44:43.704 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:43.704 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:43.704 "hdgst": false, 00:44:43.704 "ddgst": false 00:44:43.704 }, 00:44:43.704 "method": "bdev_nvme_attach_controller" 00:44:43.704 }' 00:44:43.704 [2024-10-01 22:43:38.907808] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:44:43.704 [2024-10-01 22:43:38.907862] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:44:43.704 [2024-10-01 22:43:38.910066] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
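[annotation] Each bdevperf instance consumes its generated config via --json /dev/fd/63; the printf output above is the per-controller entry. The full document handed to bdevperf plausibly looks like the sketch below (the "subsystems"/"config" wrapper is an assumption about what the jq step assembles; the inner params/method entry is verbatim from the trace):
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }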
00:44:43.704 [2024-10-01 22:43:38.910114] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:44:43.704 [2024-10-01 22:43:38.910361] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:44:43.704 [2024-10-01 22:43:38.910406] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:44:43.704 [2024-10-01 22:43:38.910577] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:44:43.704 [2024-10-01 22:43:38.910621] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:44:43.964 [2024-10-01 22:43:39.055172] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:43.964 [2024-10-01 22:43:39.098672] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:43.964 [2024-10-01 22:43:39.107558] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:44:43.964 [2024-10-01 22:43:39.148554] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:44:43.964 [2024-10-01 22:43:39.149875] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:43.964 [2024-10-01 22:43:39.194174] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:43.964 [2024-10-01 22:43:39.199906] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:44:44.226 [2024-10-01 22:43:39.245133] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:44:44.486 Running I/O for 1 seconds... 00:44:44.486 Running I/O for 1 seconds... 00:44:44.486 Running I/O for 1 seconds... 00:44:44.747 Running I/O for 1 seconds... 
00:44:45.343 13159.00 IOPS, 51.40 MiB/s 00:44:45.343 Latency(us) 00:44:45.343 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:45.343 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:44:45.343 Nvme1n1 : 1.01 13214.17 51.62 0.00 0.00 9653.62 2102.61 13380.27 00:44:45.343 =================================================================================================================== 00:44:45.343 Total : 13214.17 51.62 0.00 0.00 9653.62 2102.61 13380.27 00:44:45.343 18660.00 IOPS, 72.89 MiB/s 00:44:45.343 Latency(us) 00:44:45.343 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:45.343 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:44:45.343 Nvme1n1 : 1.01 18727.63 73.15 0.00 0.00 6817.33 3126.61 10376.53 00:44:45.343 =================================================================================================================== 00:44:45.343 Total : 18727.63 73.15 0.00 0.00 6817.33 3126.61 10376.53 00:44:45.602 176688.00 IOPS, 690.19 MiB/s 00:44:45.602 Latency(us) 00:44:45.602 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:45.602 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:44:45.602 Nvme1n1 : 1.00 176333.99 688.80 0.00 0.00 721.95 327.68 1993.39 00:44:45.602 =================================================================================================================== 00:44:45.602 Total : 176333.99 688.80 0.00 0.00 721.95 327.68 1993.39 00:44:45.602 22:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 451542 00:44:45.602 11705.00 IOPS, 45.72 MiB/s 00:44:45.602 Latency(us) 00:44:45.602 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:45.602 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:44:45.602 Nvme1n1 : 1.01 11760.66 45.94 0.00 0.00 10845.59 4860.59 17476.27 00:44:45.602 =================================================================================================================== 00:44:45.603 Total : 11760.66 45.94 0.00 0.00 10845.59 4860.59 17476.27 00:44:45.862 22:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 451544 00:44:45.862 22:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 451546 00:44:45.862 22:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:45.862 22:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:45.862 22:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:44:45.862 22:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:45.862 22:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:44:45.862 22:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:44:45.862 22:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:44:45.862 22:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:44:45.862 22:43:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:45.862 22:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:44:45.862 22:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:45.862 22:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:45.862 rmmod nvme_tcp 00:44:45.862 rmmod nvme_fabrics 00:44:45.862 rmmod nvme_keyring 00:44:45.862 22:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:45.862 22:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:44:45.862 22:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:44:45.862 22:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 451366 ']' 00:44:45.862 22:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 451366 00:44:45.862 22:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 451366 ']' 00:44:45.862 22:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 451366 00:44:45.862 22:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:44:45.862 22:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:44:45.862 22:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 451366 00:44:46.121 22:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:44:46.121 22:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:44:46.121 22:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 451366' 00:44:46.121 killing process with pid 451366 00:44:46.121 22:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 451366 00:44:46.121 22:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 451366 00:44:46.121 22:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:44:46.121 22:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:44:46.121 22:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:44:46.121 22:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:44:46.121 22:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:44:46.121 22:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:44:46.121 22:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:44:46.121 22:43:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:46.121 22:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:46.121 22:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:46.121 22:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:46.121 22:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:48.666 00:44:48.666 real 0m13.112s 00:44:48.666 user 0m16.918s 00:44:48.666 sys 0m7.814s 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:44:48.666 ************************************ 00:44:48.666 END TEST nvmf_bdev_io_wait 00:44:48.666 ************************************ 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:44:48.666 ************************************ 00:44:48.666 START TEST nvmf_queue_depth 00:44:48.666 ************************************ 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:44:48.666 * Looking for test storage... 
00:44:48.666 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:44:48.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:48.666 --rc genhtml_branch_coverage=1 00:44:48.666 --rc genhtml_function_coverage=1 00:44:48.666 --rc genhtml_legend=1 00:44:48.666 --rc geninfo_all_blocks=1 00:44:48.666 --rc geninfo_unexecuted_blocks=1 00:44:48.666 00:44:48.666 ' 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:44:48.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:48.666 --rc genhtml_branch_coverage=1 00:44:48.666 --rc genhtml_function_coverage=1 00:44:48.666 --rc genhtml_legend=1 00:44:48.666 --rc geninfo_all_blocks=1 00:44:48.666 --rc geninfo_unexecuted_blocks=1 00:44:48.666 00:44:48.666 ' 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:44:48.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:48.666 --rc genhtml_branch_coverage=1 00:44:48.666 --rc genhtml_function_coverage=1 00:44:48.666 --rc genhtml_legend=1 00:44:48.666 --rc geninfo_all_blocks=1 00:44:48.666 --rc geninfo_unexecuted_blocks=1 00:44:48.666 00:44:48.666 ' 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:44:48.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:48.666 --rc genhtml_branch_coverage=1 00:44:48.666 --rc genhtml_function_coverage=1 00:44:48.666 --rc genhtml_legend=1 00:44:48.666 --rc geninfo_all_blocks=1 00:44:48.666 --rc 
geninfo_unexecuted_blocks=1 00:44:48.666 00:44:48.666 ' 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:48.666 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:48.667 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:48.667 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:44:48.667 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:44:48.667 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:48.667 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:48.667 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:48.667 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:48.667 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:48.667 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:44:48.667 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:48.667 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:48.667 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:48.667 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:48.667 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:48.667 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:48.667 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:44:48.667 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:48.667 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:44:48.667 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:48.667 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:48.667 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:48.667 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:48.667 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:44:48.667 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:44:48.667 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:44:48.667 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:48.667 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:48.667 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:48.667 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:44:48.667 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:44:48.667 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:44:48.667 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:44:48.667 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:44:48.667 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:48.667 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:44:48.667 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:44:48.667 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:44:48.667 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:48.667 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:48.667 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:48.667 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:44:48.667 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:44:48.667 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:44:48.667 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:44:56.804 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:56.804 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:44:56.804 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:56.804 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:56.804 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:56.804 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:44:56.804 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:56.804 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:44:56.804 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:56.804 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:44:56.804 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:44:56.804 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:44:56.804 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:44:56.804 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:44:56.804 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:44:56.804 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:56.804 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:56.804 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:56.804 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:56.804 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:56.804 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:56.804 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:56.804 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:44:56.804 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:56.804 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:56.804 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:56.804 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:56.804 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:44:56.804 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:44:56.804 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:44:56.804 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:44:56.804 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:44:56.804 22:43:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:44:56.804 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:56.804 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:44:56.804 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:44:56.804 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:56.804 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:56.804 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:56.804 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:56.804 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:56.804 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:56.804 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:44:56.804 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:44:56.804 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:56.804 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
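
The device discovery traced here is ordinary sysfs walking: match a PCI vendor/device pair, then read the kernel net device registered under that function. A standalone sketch of the same lookup, matching only the 0x159b E810 id seen in this run (the real helper also checks the other e810/x722/mlx ids and the bound driver):

  #!/usr/bin/env bash
  intel=0x8086
  for pci in /sys/bus/pci/devices/*; do
      vendor=$(cat "$pci/vendor"); device=$(cat "$pci/device")
      [[ $vendor == "$intel" && $device == 0x159b ]] || continue
      echo "Found ${pci##*/} ($vendor - $device)"          # e.g. 0000:4b:00.0
      for net in "$pci"/net/*; do
          [ -e "$net" ] && echo "  net device: ${net##*/}" # e.g. cvl_0_0
      done
  done
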
00:44:56.805 Found net devices under 0000:4b:00.0: cvl_0_0 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:44:56.805 Found net devices under 0000:4b:00.1: cvl_0_1 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:56.805 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:56.805 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.688 ms 00:44:56.805 00:44:56.805 --- 10.0.0.2 ping statistics --- 00:44:56.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:56.805 rtt min/avg/max/mdev = 0.688/0.688/0.688/0.000 ms 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:56.805 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:44:56.805 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.342 ms 00:44:56.805 00:44:56.805 --- 10.0.0.1 ping statistics --- 00:44:56.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:56.805 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=456229 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 456229 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 456229 ']' 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:56.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
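
Everything nvmf_tcp_init did above is stock iproute2 plus one iptables rule; the commands below are lifted directly from the trace. Moving cvl_0_0 into its own namespace gives the initiator (10.0.0.1 on cvl_0_1) and the target (10.0.0.2 on cvl_0_0) separate network stacks, so NVMe/TCP traffic really crosses the E810 link instead of short-circuiting through loopback:

  #!/usr/bin/env bash
  set -e
  NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"              # target port moves into the ns
  ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side stays in root ns
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  # open the NVMe/TCP port; the SPDK_NVMF comment lets teardown strip the rule
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                           # initiator -> target, as verified above
  ip netns exec "$NS" ping -c 1 10.0.0.1       # target -> initiator
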
00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:44:56.805 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:44:56.805 [2024-10-01 22:43:51.050735] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:44:56.805 [2024-10-01 22:43:51.051855] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:44:56.805 [2024-10-01 22:43:51.051911] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:56.805 [2024-10-01 22:43:51.118451] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:56.805 [2024-10-01 22:43:51.182781] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:56.806 [2024-10-01 22:43:51.182818] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:56.806 [2024-10-01 22:43:51.182823] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:56.806 [2024-10-01 22:43:51.182828] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:56.806 [2024-10-01 22:43:51.182833] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:56.806 [2024-10-01 22:43:51.182850] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:44:56.806 [2024-10-01 22:43:51.300555] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:44:56.806 [2024-10-01 22:43:51.300863] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
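
waitforlisten itself is not expanded by xtrace here; the 'Waiting for process...' message and max_retries=100 above imply polling both the pid and the RPC socket. A minimal sketch under that assumption — the spdk_get_version probe is a stand-in, not necessarily what the helper actually calls:

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      for ((i = 0; i < 100; i++)); do              # max_retries=100 as traced
          kill -0 "$pid" 2>/dev/null || return 1   # target exited early
          scripts/rpc.py -s "$rpc_addr" spdk_get_version \
              >/dev/null 2>&1 && return 0          # socket is up and answering
          sleep 0.5
      done
      return 1
  }
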
00:44:56.806 22:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:44:56.806 22:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:44:56.806 22:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:44:56.806 22:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:56.806 22:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:44:56.806 22:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:56.806 22:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:44:56.806 22:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:56.806 22:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:44:56.806 [2024-10-01 22:43:51.375579] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:56.806 22:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:56.806 22:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:44:56.806 22:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:56.806 22:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:44:56.806 Malloc0 00:44:56.806 22:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:56.806 22:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:44:56.806 22:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:56.806 22:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:44:56.806 22:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:56.806 22:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:44:56.806 22:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:56.806 22:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:44:56.806 22:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:56.806 22:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:56.806 22:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
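
Steps @23 through @27 of queue_depth.sh above come down to five RPCs against the freshly started target: create the TCP transport, back it with a 64 MiB malloc bdev, and expose that as a namespace of cnode1 on the 10.0.0.2:4420 listener. Condensed, with rpc.py (default socket /var/tmp/spdk.sock) standing in for the rpc_cmd wrapper and the repo path shortened:

  rpc="scripts/rpc.py"
  $rpc nvmf_create_transport -t tcp -o -u 8192   # transport flags exactly as traced
  $rpc bdev_malloc_create 64 512 -b Malloc0      # MALLOC_BDEV_SIZE=64, BLOCK_SIZE=512
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
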
00:44:56.806 22:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:44:56.806 [2024-10-01 22:43:51.443702] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:56.806 22:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:56.806 22:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=456254 00:44:56.806 22:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:44:56.806 22:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:44:56.806 22:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 456254 /var/tmp/bdevperf.sock 00:44:56.806 22:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 456254 ']' 00:44:56.806 22:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:44:56.806 22:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:44:56.806 22:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:44:56.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:44:56.806 22:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:44:56.806 22:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:44:56.806 [2024-10-01 22:43:51.509429] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
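
The -z flag makes bdevperf start idle and wait on its own RPC socket, which is why the test can attach the controller explicitly before any I/O runs; both the attach and the perform_tests kick appear in the trace that follows. Reduced to the two driving commands (rpc.py again standing in for rpc_cmd, paths shortened):

  rpc="scripts/rpc.py -s /var/tmp/bdevperf.sock"
  $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1     # yields bdev NVMe0n1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
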
00:44:56.806 [2024-10-01 22:43:51.509496] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid456254 ] 00:44:56.806 [2024-10-01 22:43:51.575573] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:56.806 [2024-10-01 22:43:51.650016] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:44:57.067 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:44:57.067 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:44:57.067 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:44:57.067 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:57.067 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:44:57.328 NVMe0n1 00:44:57.328 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:57.328 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:44:57.328 Running I/O for 10 seconds... 00:45:07.623 9216.00 IOPS, 36.00 MiB/s 9222.50 IOPS, 36.03 MiB/s 9510.33 IOPS, 37.15 MiB/s 10022.00 IOPS, 39.15 MiB/s 10447.20 IOPS, 40.81 MiB/s 10752.50 IOPS, 42.00 MiB/s 10935.71 IOPS, 42.72 MiB/s 11133.75 IOPS, 43.49 MiB/s 11270.89 IOPS, 44.03 MiB/s 11383.40 IOPS, 44.47 MiB/s 00:45:07.623 Latency(us) 00:45:07.623 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:07.623 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:45:07.623 Verification LBA range: start 0x0 length 0x4000 00:45:07.623 NVMe0n1 : 10.05 11424.82 44.63 0.00 0.00 89328.54 13598.72 62914.56 00:45:07.623 =================================================================================================================== 00:45:07.623 Total : 11424.82 44.63 0.00 0.00 89328.54 13598.72 62914.56 00:45:07.623 { 00:45:07.623 "results": [ 00:45:07.623 { 00:45:07.623 "job": "NVMe0n1", 00:45:07.623 "core_mask": "0x1", 00:45:07.623 "workload": "verify", 00:45:07.623 "status": "finished", 00:45:07.623 "verify_range": { 00:45:07.623 "start": 0, 00:45:07.623 "length": 16384 00:45:07.623 }, 00:45:07.623 "queue_depth": 1024, 00:45:07.623 "io_size": 4096, 00:45:07.623 "runtime": 10.053371, 00:45:07.623 "iops": 11424.82456879389, 00:45:07.623 "mibps": 44.628220971851135, 00:45:07.623 "io_failed": 0, 00:45:07.623 "io_timeout": 0, 00:45:07.623 "avg_latency_us": 89328.53845409113, 00:45:07.623 "min_latency_us": 13598.72, 00:45:07.623 "max_latency_us": 62914.56 00:45:07.623 } 00:45:07.623 ], 00:45:07.623 "core_count": 1 00:45:07.623 } 00:45:07.623 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 456254 00:45:07.623 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 456254 ']' 00:45:07.623 22:44:02 
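
In the summary above, the MiB/s column is derived rather than separately measured: IOPS times the 4096-byte I/O size, divided by 2^20. Checking the final figure against the JSON block:

  echo 'scale=4; 11424.82 * 4096 / 1048576' | bc   # 44.6281, reported as 44.63 MiB/s
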
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 456254 00:45:07.623 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:45:07.623 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:45:07.623 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 456254 00:45:07.623 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:45:07.623 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:45:07.623 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 456254' 00:45:07.623 killing process with pid 456254 00:45:07.623 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 456254 00:45:07.623 Received shutdown signal, test time was about 10.000000 seconds 00:45:07.623 00:45:07.623 Latency(us) 00:45:07.623 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:07.623 =================================================================================================================== 00:45:07.623 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:45:07.623 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 456254 00:45:07.623 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:45:07.623 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:45:07.623 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:45:07.623 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:45:07.623 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:07.623 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:45:07.623 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:07.623 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:07.623 rmmod nvme_tcp 00:45:07.623 rmmod nvme_fabrics 00:45:07.884 rmmod nvme_keyring 00:45:07.884 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:45:07.884 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:45:07.884 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:45:07.884 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 456229 ']' 00:45:07.884 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 456229 00:45:07.884 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 456229 ']' 00:45:07.884 22:44:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 456229 00:45:07.884 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:45:07.884 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:45:07.884 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 456229 00:45:07.884 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:45:07.884 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:45:07.884 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 456229' 00:45:07.884 killing process with pid 456229 00:45:07.884 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 456229 00:45:07.884 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 456229 00:45:08.145 22:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:45:08.145 22:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:45:08.145 22:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:45:08.145 22:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:45:08.145 22:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:45:08.145 22:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:45:08.145 22:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:45:08.145 22:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:08.145 22:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:45:08.145 22:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:08.145 22:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:08.145 22:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:10.056 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:45:10.056 00:45:10.056 real 0m21.785s 00:45:10.056 user 0m24.627s 00:45:10.056 sys 0m7.178s 00:45:10.056 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:10.056 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:45:10.056 ************************************ 00:45:10.056 END TEST nvmf_queue_depth 00:45:10.056 ************************************ 00:45:10.056 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
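
Teardown mirrors the setup: unload the initiator-side modules, rewrite the filter table without the SPDK_NVMF-tagged rule (the iptr helper's save/grep/restore pipeline above), drop the namespace, and flush the leftover address. The equivalent commands follow; the explicit 'ip netns delete' is an assumption about what _remove_spdk_ns does, since xtrace hides its body:

  modprobe -r nvme-tcp                          # rmmod of nvme_tcp/nvme_fabrics/nvme_keyring above
  modprobe -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only our tagged rule
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true    # assumed _remove_spdk_ns body
  ip -4 addr flush cvl_0_1                      # as traced once the ns is gone
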
nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:45:10.056 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:45:10.056 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:45:10.056 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:45:10.317 ************************************ 00:45:10.317 START TEST nvmf_target_multipath 00:45:10.317 ************************************ 00:45:10.317 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:45:10.317 * Looking for test storage... 00:45:10.317 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:45:10.317 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:45:10.317 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:45:10.317 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:45:10.317 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:45:10.317 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:10.317 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:10.317 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:10.317 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:45:10.317 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:45:10.317 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:45:10.317 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:45:10.317 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:45:10.317 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:45:10.317 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:45:10.317 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:10.317 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:45:10.317 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:45:10.317 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:10.317 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:10.317 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:45:10.317 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:45:10.317 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:10.317 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:45:10.317 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:45:10.317 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:45:10.317 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:45:10.317 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:10.317 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:45:10.317 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:45:10.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:10.318 --rc genhtml_branch_coverage=1 00:45:10.318 --rc genhtml_function_coverage=1 00:45:10.318 --rc genhtml_legend=1 00:45:10.318 --rc geninfo_all_blocks=1 00:45:10.318 --rc geninfo_unexecuted_blocks=1 00:45:10.318 00:45:10.318 ' 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:45:10.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:10.318 --rc genhtml_branch_coverage=1 00:45:10.318 --rc genhtml_function_coverage=1 00:45:10.318 --rc genhtml_legend=1 00:45:10.318 --rc geninfo_all_blocks=1 00:45:10.318 --rc geninfo_unexecuted_blocks=1 00:45:10.318 00:45:10.318 ' 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:45:10.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:10.318 --rc genhtml_branch_coverage=1 00:45:10.318 --rc genhtml_function_coverage=1 00:45:10.318 --rc genhtml_legend=1 00:45:10.318 --rc geninfo_all_blocks=1 00:45:10.318 --rc geninfo_unexecuted_blocks=1 00:45:10.318 00:45:10.318 ' 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:45:10.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:10.318 --rc genhtml_branch_coverage=1 00:45:10.318 --rc genhtml_function_coverage=1 00:45:10.318 --rc 
genhtml_legend=1 00:45:10.318 --rc geninfo_all_blocks=1 00:45:10.318 --rc geninfo_unexecuted_blocks=1 00:45:10.318 00:45:10.318 ' 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:10.318 22:44:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:45:10.318 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:45:18.616 22:44:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:45:18.616 22:44:12 
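The array population above is a device-ID table lookup: common.sh keys an associative array of discovered PCI functions by "vendor:device" and appends any matches to a per-family list (e810 for Intel 0x1592/0x159b, x722 for 0x37d2, mlx for the Mellanox IDs). A minimal sketch of that pattern, with a hand-filled cache standing in for the real sysfs scan:

#!/usr/bin/env bash
# Sketch of the pci_bus_cache["vendor:device"] lookups in the trace.
# The cache is hand-populated here for illustration; the real script
# fills it by walking /sys/bus/pci/devices.
declare -A pci_bus_cache=(
  ["0x8086:0x159b"]="0000:4b:00.0 0000:4b:00.1"   # the two E810 ports found above
)
intel=0x8086 mellanox=0x15b3
e810=() mlx=()
e810+=(${pci_bus_cache["$intel:0x1592"]})   # missing key expands to nothing
e810+=(${pci_bus_cache["$intel:0x159b"]})   # word-splits into two BDFs
mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
echo "e810: ${e810[*]:-none}  mlx: ${mlx[*]:-none}"

The deliberately unquoted expansions are what let one cache entry holding several space-separated PCI addresses populate several array slots at once.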
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:45:18.616 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:45:18.616 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:18.616 22:44:12 
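Each matched PCI function is then resolved to its kernel interface by globbing the device's sysfs net/ directory, which is where the cvl_0_0 and cvl_0_1 names reported below come from. A standalone sketch (the BDF is illustrative; substitute a NIC present on your host):

#!/usr/bin/env bash
# Sketch: map a PCI function to its bound network interface via sysfs,
# mirroring the pci_net_devs=(".../net/"*) glob in the trace.
pci=0000:4b:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
if [[ -e ${pci_net_devs[0]} ]]; then
  pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the path, keep the ifname
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
else
  echo "no netdev bound to $pci" >&2
fi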
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:45:18.616 Found net devices under 0000:4b:00.0: cvl_0_0 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:45:18.616 Found net devices under 0000:4b:00.1: cvl_0_1 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:45:18.616 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:45:18.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:45:18.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.578 ms 00:45:18.617 00:45:18.617 --- 10.0.0.2 ping statistics --- 00:45:18.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:18.617 rtt min/avg/max/mdev = 0.578/0.578/0.578/0.000 ms 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:45:18.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
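The topology nvmf_tcp_init assembles above: the target port cvl_0_0 is moved into a fresh network namespace cvl_0_0_ns_spdk and given 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1/24, an iptables rule (tagged with an SPDK_NVMF comment so teardown can find it later) opens TCP/4420, and one ping in each direction verifies the path before any NVMe/TCP traffic. A condensed sketch of the same sequence, assuming root and two physically looped ports:

#!/usr/bin/env bash
set -e
tgt_if=cvl_0_0 ini_if=cvl_0_1 ns=cvl_0_0_ns_spdk
ip netns add "$ns"
ip link set "$tgt_if" netns "$ns"              # target side lives in the netns
ip addr add 10.0.0.1/24 dev "$ini_if"          # initiator side, root netns
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
ip link set "$ini_if" up
ip netns exec "$ns" ip link set "$tgt_if" up
ip netns exec "$ns" ip link set lo up
iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF: NVMe/TCP test port'
ping -c 1 10.0.0.2                             # root ns -> namespace
ip netns exec "$ns" ping -c 1 10.0.0.1         # namespace -> root ns

Putting one port of a looped NIC pair into its own namespace is what forces the pings, and later the NVMe/TCP connection itself, onto the wire instead of the kernel loopback.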
00:45:18.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.345 ms 00:45:18.617 00:45:18.617 --- 10.0.0.1 ping statistics --- 00:45:18.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:18.617 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:45:18.617 only one NIC for nvmf test 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:18.617 rmmod nvme_tcp 00:45:18.617 rmmod nvme_fabrics 00:45:18.617 rmmod nvme_keyring 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p 
]] 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:18.617 22:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:20.004 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:45:20.004 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:45:20.004 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:45:20.004 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:45:20.004 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:45:20.004 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:20.004 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:45:20.004 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:20.004 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:20.004 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:45:20.004 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:45:20.004 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:45:20.004 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:45:20.004 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:45:20.004 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:45:20.004 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:45:20.004 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:45:20.004 22:44:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:45:20.004 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:45:20.004 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:45:20.004 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:20.004 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:45:20.004 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:20.004 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:20.004 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:20.004 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:45:20.004 00:45:20.004 real 0m9.669s 00:45:20.004 user 0m2.088s 00:45:20.004 sys 0m5.513s 00:45:20.004 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:20.004 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:45:20.004 ************************************ 00:45:20.004 END TEST nvmf_target_multipath 00:45:20.004 ************************************ 00:45:20.004 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:45:20.004 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:45:20.004 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:45:20.004 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:45:20.004 ************************************ 00:45:20.004 START TEST nvmf_zcopy 00:45:20.004 ************************************ 00:45:20.004 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:45:20.004 * Looking for test storage... 
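The multipath test above exited early ("only one NIC for nvmf test") and ran nvmftestfini twice, once explicitly and once via the EXIT trap, so the teardown has to be idempotent. Its two notable tricks: module unloading runs under set +e so an already-absent nvme-tcp cannot abort the trap handler, and iptr removes exactly the firewall rules the test added by round-tripping the ruleset through iptables-save and dropping every line carrying the SPDK_NVMF comment tag. A sketch of both, to be run as root; the retry loop approximates the trace's set +e loop:

#!/usr/bin/env bash
# iptr: restore the saved ruleset minus every SPDK_NVMF-tagged rule.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Tolerant module unload: failures are ignored, success breaks out early.
set +e
for _ in {1..20}; do
  modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
done
set -e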
00:45:20.004 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:45:20.004 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:45:20.004 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:45:20.004 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:45:20.004 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:45:20.004 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:20.004 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:20.004 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:20.004 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:45:20.004 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:45:20.004 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:45:20.004 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:45:20.004 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:45:20.004 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:45:20.004 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:45:20.005 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:20.005 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:45:20.005 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:45:20.005 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:20.005 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:20.005 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:45:20.005 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:45:20.005 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:20.005 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:45:20.005 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:45:20.005 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:45:20.005 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:45:20.005 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:20.005 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:45:20.266 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:45:20.266 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:20.266 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:20.266 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:45:20.266 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:20.266 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:45:20.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:20.266 --rc genhtml_branch_coverage=1 00:45:20.266 --rc genhtml_function_coverage=1 00:45:20.266 --rc genhtml_legend=1 00:45:20.266 --rc geninfo_all_blocks=1 00:45:20.266 --rc geninfo_unexecuted_blocks=1 00:45:20.266 00:45:20.266 ' 00:45:20.266 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:45:20.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:20.266 --rc genhtml_branch_coverage=1 00:45:20.266 --rc genhtml_function_coverage=1 00:45:20.266 --rc genhtml_legend=1 00:45:20.266 --rc geninfo_all_blocks=1 00:45:20.266 --rc geninfo_unexecuted_blocks=1 00:45:20.266 00:45:20.266 ' 00:45:20.266 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:45:20.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:20.266 --rc genhtml_branch_coverage=1 00:45:20.266 --rc genhtml_function_coverage=1 00:45:20.266 --rc genhtml_legend=1 00:45:20.266 --rc geninfo_all_blocks=1 00:45:20.266 --rc geninfo_unexecuted_blocks=1 00:45:20.266 00:45:20.266 ' 00:45:20.266 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:45:20.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:20.266 --rc genhtml_branch_coverage=1 00:45:20.266 --rc genhtml_function_coverage=1 00:45:20.266 --rc genhtml_legend=1 00:45:20.266 --rc geninfo_all_blocks=1 00:45:20.266 --rc geninfo_unexecuted_blocks=1 00:45:20.266 00:45:20.266 ' 00:45:20.266 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
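Between the two tests, autotest_common.sh probes the installed lcov and picks coverage flags by comparing version strings in pure bash: lt 1.15 2 splits both operands on ".", "-" and ":" into arrays and compares field by field numerically, so 1.15 correctly sorts below 2 even though it sorts after it lexically. A compact sketch of that comparison logic (a simplification of the cmp_versions traced above, not a verbatim copy):

#!/usr/bin/env bash
# Returns 0 when version $1 < version $2, comparing numeric fields.
version_lt() {
  local -a ver1 ver2
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$2"
  local i max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( i = 0; i < max; i++ )); do
    (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0   # missing fields count as 0
    (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
  done
  return 1   # equal is not less-than
}
version_lt 1.15 2   && echo "1.15 < 2"
version_lt 1.15 1.2 || echo "1.15 >= 1.2 (15 > 2 numerically)"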
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:20.266 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:45:20.266 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:20.266 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:20.266 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:20.266 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:20.266 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:20.266 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:20.266 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:20.266 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:20.266 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:20.266 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:20.266 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:45:20.266 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:45:20.266 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:20.266 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:20.266 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:20.266 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:20.266 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:20.266 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:45:20.266 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:20.266 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:20.267 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:20.267 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:20.267 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:20.267 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:20.267 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:45:20.267 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:20.267 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:45:20.267 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:20.267 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:20.267 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:20.267 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:20.267 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:20.267 22:44:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:45:20.267 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:45:20.267 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:20.267 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:20.267 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:20.267 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:45:20.267 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:45:20.267 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:20.267 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:45:20.267 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:45:20.267 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:45:20.267 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:20.267 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:20.267 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:20.267 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:45:20.267 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:45:20.267 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:45:20.267 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:45:28.411 22:44:22 
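build_nvmf_app_args, re-run here for the zcopy test, shows the argument-assembly idiom these scripts use throughout: flags accumulate in a bash array, each behind its own conditional, and only at spawn time does the array become a command line, which is why --interrupt-mode appears only because this suite runs with the interrupt-mode switch set. A minimal sketch of the pattern (APP_BIN and the toggle variables are illustrative, not the exact SPDK helper):

#!/usr/bin/env bash
APP_BIN=./build/bin/nvmf_tgt
SHM_ID=0
INTERRUPT_MODE=1

app=("$APP_BIN" -i "$SHM_ID" -e 0xFFFF)   # always present
if (( INTERRUPT_MODE == 1 )); then
  app+=(--interrupt-mode)                 # appended only when enabled
fi
printf '%q ' "${app[@]}"; echo            # inspect the final command line

Unlike a flat string, the array keeps arguments with embedded spaces intact when the command is finally executed as "${app[@]}".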
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:45:28.411 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:45:28.411 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:45:28.411 Found net devices under 0000:4b:00.0: cvl_0_0 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:45:28.411 Found net devices under 0000:4b:00.1: cvl_0_1 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:45:28.411 22:44:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:45:28.411 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:45:28.411 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:45:28.411 00:45:28.411 --- 10.0.0.2 ping statistics --- 00:45:28.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:28.411 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:45:28.411 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:45:28.411 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:45:28.411 00:45:28.411 --- 10.0.0.1 ping statistics --- 00:45:28.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:28.411 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:45:28.411 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:45:28.412 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:28.412 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:45:28.412 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=466589 00:45:28.412 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 466589 00:45:28.412 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:45:28.412 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 466589 ']' 00:45:28.412 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:28.412 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:45:28.412 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:28.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:28.412 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:45:28.412 22:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:45:28.412 [2024-10-01 22:44:22.711250] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:45:28.412 [2024-10-01 22:44:22.712361] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:45:28.412 [2024-10-01 22:44:22.712411] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:28.412 [2024-10-01 22:44:22.800177] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:28.412 [2024-10-01 22:44:22.892477] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:28.412 [2024-10-01 22:44:22.892537] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:28.412 [2024-10-01 22:44:22.892546] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:28.412 [2024-10-01 22:44:22.892553] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:28.412 [2024-10-01 22:44:22.892560] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:45:28.412 [2024-10-01 22:44:22.892589] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:45:28.412 [2024-10-01 22:44:23.020289] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:45:28.412 [2024-10-01 22:44:23.020566] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
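nvmfappstart has now launched the target inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2, pid 466589), and the startup notices confirm a single reactor on core 1 with its spdk_threads in interrupt mode; waitforlisten then blocks until the app answers on /var/tmp/spdk.sock. A sketch of that launch-and-wait pattern, where the polling loop is a simplification of SPDK's waitforlisten and the paths assume a typical checkout:

#!/usr/bin/env bash
NVMF_TGT=./build/bin/nvmf_tgt
RPC_PY=./scripts/rpc.py
ns=cvl_0_0_ns_spdk

ip netns exec "$ns" "$NVMF_TGT" -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
nvmfpid=$!

# Poll the RPC socket until the target services requests.
for _ in {1..100}; do
  if "$RPC_PY" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
    echo "nvmf_tgt (pid $nvmfpid) is listening"
    break
  fi
  sleep 0.1
done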
00:45:28.412 22:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:45:28.412 22:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0
00:45:28.412 22:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:45:28.412 22:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable
00:45:28.412 22:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:45:28.412 22:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:45:28.412 22:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:45:28.412 22:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:45:28.412 22:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:45:28.412 22:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:45:28.412 [2024-10-01 22:44:23.569445] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:45:28.412 22:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:45:28.412 22:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:45:28.412 22:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:45:28.412 22:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:45:28.412 22:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:45:28.412 22:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:45:28.412 22:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:45:28.412 22:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:45:28.412 [2024-10-01 22:44:23.597695] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:45:28.412 22:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:45:28.412 22:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:45:28.412 22:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:45:28.412 22:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:45:28.412 22:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
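The rpc_cmd calls traced above (together with the bdev and namespace steps that follow just below) can be issued directly with scripts/rpc.py, since rpc_cmd is a thin wrapper around it; the flags are exactly the ones logged:

# Assumes $SPDK points at the checkout used above and the target is already
# listening on /var/tmp/spdk.sock (the rpc.py default socket).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"

$RPC nvmf_create_transport -t tcp -o -c 0 --zcopy    # TCP transport with zero-copy enabled
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_malloc_create 32 4096 -b malloc0           # 32 MB backing bdev, 4 KiB blocks
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1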
00:45:28.412 22:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:45:28.412 22:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:45:28.412 22:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:45:28.412 malloc0
00:45:28.412 22:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:45:28.412 22:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:45:28.412 22:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:45:28.412 22:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:45:28.412 22:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:45:28.412 22:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:45:28.412 22:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:45:28.412 22:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=()
00:45:28.412 22:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config
00:45:28.412 22:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}"
00:45:28.412 22:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF
00:45:28.412 {
00:45:28.412 "params": {
00:45:28.412 "name": "Nvme$subsystem",
00:45:28.412 "trtype": "$TEST_TRANSPORT",
00:45:28.412 "traddr": "$NVMF_FIRST_TARGET_IP",
00:45:28.412 "adrfam": "ipv4",
00:45:28.412 "trsvcid": "$NVMF_PORT",
00:45:28.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:45:28.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:45:28.412 "hdgst": ${hdgst:-false},
00:45:28.412 "ddgst": ${ddgst:-false}
00:45:28.412 },
00:45:28.412 "method": "bdev_nvme_attach_controller"
00:45:28.412 }
00:45:28.412 EOF
00:45:28.412 )")
00:45:28.412 22:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat
00:45:28.412 22:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq .
00:45:28.412 22:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=,
00:45:28.412 22:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{
00:45:28.412 "params": {
00:45:28.412 "name": "Nvme1",
00:45:28.412 "trtype": "tcp",
00:45:28.412 "traddr": "10.0.0.2",
00:45:28.412 "adrfam": "ipv4",
00:45:28.412 "trsvcid": "4420",
00:45:28.412 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:45:28.412 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:45:28.412 "hdgst": false,
00:45:28.412 "ddgst": false
00:45:28.412 },
00:45:28.412 "method": "bdev_nvme_attach_controller"
00:45:28.412 }'
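The JSON printed above is what bdevperf receives on /dev/fd/62; saving the generated config to a file gives the same result. A sketch (the /tmp/bdevperf.json filename is chosen for this sketch, not taken from the run):

# -t 10: run for 10 seconds; -q 128: queue depth 128; -w verify: verified
# read/write workload; -o 8192: 8 KiB I/O size.
BDEVPERF="$SPDK/build/examples/bdevperf"
gen_nvmf_target_json > /tmp/bdevperf.json   # the generator sourced from nvmf/common.sh above
"$BDEVPERF" --json /tmp/bdevperf.json -t 10 -q 128 -w verify -o 8192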
00:45:28.673 [2024-10-01 22:44:23.706235] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization...
00:45:28.673 [2024-10-01 22:44:23.706299] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid466936 ]
00:45:28.673 [2024-10-01 22:44:23.771876] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:45:28.673 [2024-10-01 22:44:23.845387] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:45:28.932 Running I/O for 10 seconds...
00:45:39.227 6488.00 IOPS, 50.69 MiB/s
00:45:39.227 6528.00 IOPS, 51.00 MiB/s
00:45:39.227 6542.00 IOPS, 51.11 MiB/s
00:45:39.227 6549.25 IOPS, 51.17 MiB/s
00:45:39.227 6968.40 IOPS, 54.44 MiB/s
00:45:39.227 7391.50 IOPS, 57.75 MiB/s
00:45:39.227 7691.43 IOPS, 60.09 MiB/s
00:45:39.227 7918.12 IOPS, 61.86 MiB/s
00:45:39.227 8096.89 IOPS, 63.26 MiB/s
00:45:39.227 8238.40 IOPS, 64.36 MiB/s
00:45:39.227                                                                                     Latency(us)
00:45:39.227 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:45:39.227 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:45:39.227 Verification LBA range: start 0x0 length 0x1000
00:45:39.227 Nvme1n1                     :      10.01    8242.72      64.40       0.00       0.00   15475.56    1638.40   27197.44
00:45:39.227 ===================================================================================================================
00:45:39.227 Total                       :               8242.72      64.40       0.00       0.00   15475.56    1638.40   27197.44
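The second pass, started below and tracked as perfpid, is the same bdevperf against the same generated JSON but as a 5-second 50/50 random read/write mix running in the background; continuing the sketch above:

# -w randrw -M 50: random I/O, 50% reads. Backgrounded so the test can keep
# issuing RPCs against the live subsystem while I/O is in flight.
"$BDEVPERF" --json /tmp/bdevperf.json -t 5 -q 128 -w randrw -M 50 -o 8192 &
perfpid=$!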
00:45:39.227 22:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=468935
00:45:39.227 22:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:45:39.227 22:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:45:39.227 22:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:45:39.227 22:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:45:39.227 22:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=()
00:45:39.227 22:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config
00:45:39.227 22:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}"
00:45:39.227 22:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF
00:45:39.227 [heredoc config template, identical to the one shown for the first bdevperf invocation above]
00:45:39.227 EOF
00:45:39.227 )")
00:45:39.227 22:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat
00:45:39.227 [2024-10-01 22:44:34.269012] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:45:39.227 [2024-10-01 22:44:34.269038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:45:39.227 22:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq .
00:45:39.227 22:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=,
00:45:39.227 22:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ ... }'   [same Nvme1 connection JSON as printed for the first bdevperf run above]
00:45:39.227 [2024-10-01 22:44:34.280 .. 22:44:34.304] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace (same two-line pair repeated at ~12 ms intervals)
00:45:39.227 [2024-10-01 22:44:34.312732] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization...
00:45:39.227 [2024-10-01 22:44:34.312779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid468935 ]
00:45:39.227 [2024-10-01 22:44:34.316 .. 22:44:34.364] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace (same two-line pair repeated at ~12 ms intervals)
00:45:39.227 [2024-10-01 22:44:34.373325] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:45:39.227 [2024-10-01 22:44:34.376 .. 22:44:34.436] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace (same two-line pair repeated at ~12 ms intervals)
00:45:39.488 [2024-10-01 22:44:34.437548] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:45:39.488 [2024-10-01 22:44:34.448 .. 22:44:34.828] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace (same two-line pair repeated at ~12 ms intervals)
00:45:39.749 Running I/O for 5 seconds...
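Every error pair summarized here is one failed nvmf_subsystem_add_ns attempt: NSID 1 is already occupied by malloc0, so each RPC pauses the subsystem, fails, and resumes it while the random I/O keeps running. A sketch of such a retry loop, continuing the sketches above (illustrative only; the actual loop body lives in target/zcopy.sh and is not visible in this log):

# Hammer the pause/resume path: each call fails with "Requested NSID 1
# already in use" but forces a subsystem pause/resume under active zcopy I/O.
while kill -0 "$perfpid" 2>/dev/null; do
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done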
00:45:39.750 [2024-10-01 22:44:34.843 .. 22:44:35.828] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace (same two-line pair repeated at ~12-15 ms intervals)
00:45:40.794 18700.00 IOPS, 146.09 MiB/s
00:45:40.794 [2024-10-01 22:44:35.841 .. 22:44:36.827] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace (same two-line pair repeated at ~12-15 ms intervals)
00:45:41.837 18762.00 IOPS, 146.58 MiB/s
00:45:41.837 [2024-10-01 22:44:36.839 .. 22:44:37.297] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace (same two-line pair repeated at ~12-15 ms intervals)
00:45:42.097 [2024-10-01 22:44:37.312548]
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.097 [2024-10-01 22:44:37.312564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.097 [2024-10-01 22:44:37.325564] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.097 [2024-10-01 22:44:37.325580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.097 [2024-10-01 22:44:37.340010] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.097 [2024-10-01 22:44:37.340026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.358 [2024-10-01 22:44:37.353440] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.358 [2024-10-01 22:44:37.353456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.358 [2024-10-01 22:44:37.368398] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.358 [2024-10-01 22:44:37.368414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.358 [2024-10-01 22:44:37.381066] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.358 [2024-10-01 22:44:37.381082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.358 [2024-10-01 22:44:37.394018] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.358 [2024-10-01 22:44:37.394034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.358 [2024-10-01 22:44:37.408271] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.358 [2024-10-01 22:44:37.408287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.358 [2024-10-01 22:44:37.421224] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.358 [2024-10-01 22:44:37.421240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.358 [2024-10-01 22:44:37.433471] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.358 [2024-10-01 22:44:37.433486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.358 [2024-10-01 22:44:37.448487] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.358 [2024-10-01 22:44:37.448503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.358 [2024-10-01 22:44:37.461510] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.358 [2024-10-01 22:44:37.461525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.358 [2024-10-01 22:44:37.476881] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.358 [2024-10-01 22:44:37.476897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.358 [2024-10-01 22:44:37.489356] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.358 [2024-10-01 22:44:37.489371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.358 [2024-10-01 22:44:37.503994] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.358 [2024-10-01 22:44:37.504010] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.358 [2024-10-01 22:44:37.516778] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.358 [2024-10-01 22:44:37.516794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.358 [2024-10-01 22:44:37.529513] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.358 [2024-10-01 22:44:37.529527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.358 [2024-10-01 22:44:37.544118] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.358 [2024-10-01 22:44:37.544133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.358 [2024-10-01 22:44:37.557372] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.358 [2024-10-01 22:44:37.557387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.358 [2024-10-01 22:44:37.572237] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.358 [2024-10-01 22:44:37.572253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.358 [2024-10-01 22:44:37.585176] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.358 [2024-10-01 22:44:37.585191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.358 [2024-10-01 22:44:37.600364] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.358 [2024-10-01 22:44:37.600380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.619 [2024-10-01 22:44:37.613334] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.619 [2024-10-01 22:44:37.613350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.619 [2024-10-01 22:44:37.628544] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.619 [2024-10-01 22:44:37.628560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.619 [2024-10-01 22:44:37.641191] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.619 [2024-10-01 22:44:37.641206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.619 [2024-10-01 22:44:37.656085] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.619 [2024-10-01 22:44:37.656101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.619 [2024-10-01 22:44:37.668822] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.619 [2024-10-01 22:44:37.668838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.619 [2024-10-01 22:44:37.680922] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.619 [2024-10-01 22:44:37.680939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.619 [2024-10-01 22:44:37.693376] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.619 [2024-10-01 22:44:37.693391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.619 [2024-10-01 22:44:37.708315] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.619 [2024-10-01 22:44:37.708331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.619 [2024-10-01 22:44:37.721722] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.619 [2024-10-01 22:44:37.721738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.619 [2024-10-01 22:44:37.736817] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.619 [2024-10-01 22:44:37.736832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.619 [2024-10-01 22:44:37.749591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.619 [2024-10-01 22:44:37.749605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.619 [2024-10-01 22:44:37.764246] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.619 [2024-10-01 22:44:37.764262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.619 [2024-10-01 22:44:37.777047] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.619 [2024-10-01 22:44:37.777062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.619 [2024-10-01 22:44:37.789733] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.619 [2024-10-01 22:44:37.789747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.619 [2024-10-01 22:44:37.803821] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.619 [2024-10-01 22:44:37.803836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.619 [2024-10-01 22:44:37.816777] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.619 [2024-10-01 22:44:37.816792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.619 [2024-10-01 22:44:37.828735] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.619 [2024-10-01 22:44:37.828750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.619 [2024-10-01 22:44:37.841823] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.619 [2024-10-01 22:44:37.841838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.619 18783.67 IOPS, 146.75 MiB/s [2024-10-01 22:44:37.856414] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.619 [2024-10-01 22:44:37.856429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.619 [2024-10-01 22:44:37.869246] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.619 [2024-10-01 22:44:37.869260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.880 [2024-10-01 22:44:37.884291] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.880 [2024-10-01 22:44:37.884306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.880 [2024-10-01 22:44:37.897339] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.880 [2024-10-01 
22:44:37.897353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.880 [2024-10-01 22:44:37.912459] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.880 [2024-10-01 22:44:37.912474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.880 [2024-10-01 22:44:37.925106] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.880 [2024-10-01 22:44:37.925121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.880 [2024-10-01 22:44:37.938198] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.880 [2024-10-01 22:44:37.938214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.880 [2024-10-01 22:44:37.951852] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.880 [2024-10-01 22:44:37.951867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.880 [2024-10-01 22:44:37.964671] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.880 [2024-10-01 22:44:37.964687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.880 [2024-10-01 22:44:37.977474] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.880 [2024-10-01 22:44:37.977489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.880 [2024-10-01 22:44:37.992260] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.880 [2024-10-01 22:44:37.992275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.880 [2024-10-01 22:44:38.004991] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.880 [2024-10-01 22:44:38.005006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.880 [2024-10-01 22:44:38.016921] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.880 [2024-10-01 22:44:38.016941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.880 [2024-10-01 22:44:38.030089] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.880 [2024-10-01 22:44:38.030104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.880 [2024-10-01 22:44:38.044000] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.880 [2024-10-01 22:44:38.044015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.880 [2024-10-01 22:44:38.057391] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.880 [2024-10-01 22:44:38.057405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.880 [2024-10-01 22:44:38.072251] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.880 [2024-10-01 22:44:38.072266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.880 [2024-10-01 22:44:38.085099] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.880 [2024-10-01 22:44:38.085114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.880 [2024-10-01 22:44:38.099817] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.880 [2024-10-01 22:44:38.099832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.880 [2024-10-01 22:44:38.112455] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.880 [2024-10-01 22:44:38.112470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:42.880 [2024-10-01 22:44:38.125534] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:42.880 [2024-10-01 22:44:38.125549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.140 [2024-10-01 22:44:38.140245] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.140 [2024-10-01 22:44:38.140260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.140 [2024-10-01 22:44:38.152813] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.140 [2024-10-01 22:44:38.152828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.140 [2024-10-01 22:44:38.165671] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.140 [2024-10-01 22:44:38.165686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.140 [2024-10-01 22:44:38.180418] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.140 [2024-10-01 22:44:38.180433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.140 [2024-10-01 22:44:38.193129] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.140 [2024-10-01 22:44:38.193143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.140 [2024-10-01 22:44:38.208035] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.140 [2024-10-01 22:44:38.208049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.140 [2024-10-01 22:44:38.221441] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.140 [2024-10-01 22:44:38.221455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.140 [2024-10-01 22:44:38.236389] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.140 [2024-10-01 22:44:38.236404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.140 [2024-10-01 22:44:38.249613] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.140 [2024-10-01 22:44:38.249631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.140 [2024-10-01 22:44:38.264182] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.140 [2024-10-01 22:44:38.264198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.140 [2024-10-01 22:44:38.277735] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.140 [2024-10-01 22:44:38.277754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.140 [2024-10-01 22:44:38.292755] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.140 [2024-10-01 22:44:38.292770] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.140 [2024-10-01 22:44:38.305379] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.140 [2024-10-01 22:44:38.305393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.140 [2024-10-01 22:44:38.320214] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.140 [2024-10-01 22:44:38.320229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.140 [2024-10-01 22:44:38.332892] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.140 [2024-10-01 22:44:38.332907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.140 [2024-10-01 22:44:38.345207] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.140 [2024-10-01 22:44:38.345222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.140 [2024-10-01 22:44:38.360047] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.140 [2024-10-01 22:44:38.360063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.140 [2024-10-01 22:44:38.373101] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.140 [2024-10-01 22:44:38.373115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.140 [2024-10-01 22:44:38.388514] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.140 [2024-10-01 22:44:38.388529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.401 [2024-10-01 22:44:38.401844] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.401 [2024-10-01 22:44:38.401860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.401 [2024-10-01 22:44:38.415850] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.401 [2024-10-01 22:44:38.415865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.401 [2024-10-01 22:44:38.428254] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.401 [2024-10-01 22:44:38.428269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.401 [2024-10-01 22:44:38.440940] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.401 [2024-10-01 22:44:38.440955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.401 [2024-10-01 22:44:38.453798] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.401 [2024-10-01 22:44:38.453813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.401 [2024-10-01 22:44:38.467808] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.401 [2024-10-01 22:44:38.467824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.401 [2024-10-01 22:44:38.480638] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.401 [2024-10-01 22:44:38.480653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.401 [2024-10-01 22:44:38.493056] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.401 [2024-10-01 22:44:38.493071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.401 [2024-10-01 22:44:38.505942] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.401 [2024-10-01 22:44:38.505958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.401 [2024-10-01 22:44:38.520438] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.401 [2024-10-01 22:44:38.520454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.401 [2024-10-01 22:44:38.533370] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.401 [2024-10-01 22:44:38.533388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.401 [2024-10-01 22:44:38.548168] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.401 [2024-10-01 22:44:38.548183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.401 [2024-10-01 22:44:38.561838] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.401 [2024-10-01 22:44:38.561853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.401 [2024-10-01 22:44:38.577041] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.401 [2024-10-01 22:44:38.577057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.401 [2024-10-01 22:44:38.589995] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.401 [2024-10-01 22:44:38.590010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.401 [2024-10-01 22:44:38.603579] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.401 [2024-10-01 22:44:38.603594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.401 [2024-10-01 22:44:38.616635] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.402 [2024-10-01 22:44:38.616650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.402 [2024-10-01 22:44:38.629470] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.402 [2024-10-01 22:44:38.629485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.402 [2024-10-01 22:44:38.644215] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.402 [2024-10-01 22:44:38.644231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.663 [2024-10-01 22:44:38.657436] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.663 [2024-10-01 22:44:38.657451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.663 [2024-10-01 22:44:38.672170] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.663 [2024-10-01 22:44:38.672186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.663 [2024-10-01 22:44:38.684895] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.663 [2024-10-01 22:44:38.684910] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.663 [2024-10-01 22:44:38.697031] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.663 [2024-10-01 22:44:38.697047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.663 [2024-10-01 22:44:38.709529] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.663 [2024-10-01 22:44:38.709544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.663 [2024-10-01 22:44:38.724465] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.663 [2024-10-01 22:44:38.724480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.663 [2024-10-01 22:44:38.737245] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.663 [2024-10-01 22:44:38.737261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.663 [2024-10-01 22:44:38.752178] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.663 [2024-10-01 22:44:38.752193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.663 [2024-10-01 22:44:38.765418] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.663 [2024-10-01 22:44:38.765433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.663 [2024-10-01 22:44:38.780196] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.663 [2024-10-01 22:44:38.780210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.663 [2024-10-01 22:44:38.792915] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.663 [2024-10-01 22:44:38.792931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.663 [2024-10-01 22:44:38.805223] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.663 [2024-10-01 22:44:38.805238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.663 [2024-10-01 22:44:38.820650] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.663 [2024-10-01 22:44:38.820665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.663 [2024-10-01 22:44:38.833704] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.663 [2024-10-01 22:44:38.833719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.663 18785.75 IOPS, 146.76 MiB/s [2024-10-01 22:44:38.848088] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.663 [2024-10-01 22:44:38.848103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.663 [2024-10-01 22:44:38.860860] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.663 [2024-10-01 22:44:38.860876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.663 [2024-10-01 22:44:38.872989] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.663 [2024-10-01 22:44:38.873004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.663 [2024-10-01 22:44:38.886075] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.663 [2024-10-01 22:44:38.886090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.663 [2024-10-01 22:44:38.900028] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.663 [2024-10-01 22:44:38.900043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.663 [2024-10-01 22:44:38.912485] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.663 [2024-10-01 22:44:38.912502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.926 [2024-10-01 22:44:38.925403] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.926 [2024-10-01 22:44:38.925418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.926 [2024-10-01 22:44:38.940384] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.926 [2024-10-01 22:44:38.940400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.926 [2024-10-01 22:44:38.953189] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.926 [2024-10-01 22:44:38.953204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.926 [2024-10-01 22:44:38.965302] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.926 [2024-10-01 22:44:38.965317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.926 [2024-10-01 22:44:38.980391] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.926 [2024-10-01 22:44:38.980408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.926 [2024-10-01 22:44:38.994126] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.926 [2024-10-01 22:44:38.994142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.926 [2024-10-01 22:44:39.008629] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.926 [2024-10-01 22:44:39.008645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.926 [2024-10-01 22:44:39.021490] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.926 [2024-10-01 22:44:39.021505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.926 [2024-10-01 22:44:39.035824] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.926 [2024-10-01 22:44:39.035839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.926 [2024-10-01 22:44:39.049172] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.926 [2024-10-01 22:44:39.049187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.926 [2024-10-01 22:44:39.060968] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.927 [2024-10-01 22:44:39.060983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.927 [2024-10-01 22:44:39.073958] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.927 [2024-10-01 22:44:39.073973] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.927 [2024-10-01 22:44:39.088275] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.927 [2024-10-01 22:44:39.088290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.927 [2024-10-01 22:44:39.100898] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.927 [2024-10-01 22:44:39.100913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.927 [2024-10-01 22:44:39.112808] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.927 [2024-10-01 22:44:39.112823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.927 [2024-10-01 22:44:39.125574] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.927 [2024-10-01 22:44:39.125589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.927 [2024-10-01 22:44:39.140495] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.927 [2024-10-01 22:44:39.140511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.927 [2024-10-01 22:44:39.153909] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.927 [2024-10-01 22:44:39.153924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:43.927 [2024-10-01 22:44:39.168154] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:43.927 [2024-10-01 22:44:39.168170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:44.190 [2024-10-01 22:44:39.180991] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:44.190 [2024-10-01 22:44:39.181007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:44.190 [2024-10-01 22:44:39.193812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:44.190 [2024-10-01 22:44:39.193827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:44.190 [2024-10-01 22:44:39.208210] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:44.190 [2024-10-01 22:44:39.208226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:44.190 [2024-10-01 22:44:39.221250] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:44.190 [2024-10-01 22:44:39.221265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:44.190 [2024-10-01 22:44:39.236236] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:44.190 [2024-10-01 22:44:39.236252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:44.190 [2024-10-01 22:44:39.249312] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:44.190 [2024-10-01 22:44:39.249327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:44.190 [2024-10-01 22:44:39.264164] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:44.190 [2024-10-01 22:44:39.264180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:44.190 [2024-10-01 22:44:39.276878] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:44.190 [2024-10-01 22:44:39.276893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:44.190 [2024-10-01 22:44:39.289721] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:44.190 [2024-10-01 22:44:39.289743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:44.190 [2024-10-01 22:44:39.304306] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:44.190 [2024-10-01 22:44:39.304321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:44.190 [2024-10-01 22:44:39.317448] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:44.190 [2024-10-01 22:44:39.317463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:44.190 [2024-10-01 22:44:39.332711] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:44.190 [2024-10-01 22:44:39.332726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:44.190 [2024-10-01 22:44:39.345599] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:44.190 [2024-10-01 22:44:39.345614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:44.190 [2024-10-01 22:44:39.360558] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:44.190 [2024-10-01 22:44:39.360574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:44.190 [2024-10-01 22:44:39.373254] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:44.190 [2024-10-01 22:44:39.373269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:44.190 [2024-10-01 22:44:39.387979] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:44.190 [2024-10-01 22:44:39.387995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:44.190 [2024-10-01 22:44:39.401050] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:44.190 [2024-10-01 22:44:39.401066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:44.190 [2024-10-01 22:44:39.414106] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:44.190 [2024-10-01 22:44:39.414121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:44.190 [2024-10-01 22:44:39.428313] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:44.190 [2024-10-01 22:44:39.428329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:44.190 [2024-10-01 22:44:39.441203] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:44.190 [2024-10-01 22:44:39.441218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:44.451 [2024-10-01 22:44:39.452980] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:44.451 [2024-10-01 22:44:39.452995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:44.451 [2024-10-01 22:44:39.465879] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:44.451 [2024-10-01 22:44:39.465894] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:44.451 [2024-10-01 22:44:39.480812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:44.451 [2024-10-01 22:44:39.480828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:44.451 [2024-10-01 22:44:39.493839] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:44.451 [2024-10-01 22:44:39.493855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:44.451 [2024-10-01 22:44:39.508350] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:44.451 [2024-10-01 22:44:39.508365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:44.451 [2024-10-01 22:44:39.520805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:44.451 [2024-10-01 22:44:39.520821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:44.451 [2024-10-01 22:44:39.533738] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:44.451 [2024-10-01 22:44:39.533754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:44.451 [2024-10-01 22:44:39.547929] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:44.451 [2024-10-01 22:44:39.547948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:44.451 [2024-10-01 22:44:39.560565] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:44.451 [2024-10-01 22:44:39.560581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:44.451 [2024-10-01 22:44:39.573538] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:44.451 [2024-10-01 22:44:39.573553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:44.451 [2024-10-01 22:44:39.587920] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:44.451 [2024-10-01 22:44:39.587935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:44.451 [2024-10-01 22:44:39.601294] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:44.451 [2024-10-01 22:44:39.601308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:44.451 [2024-10-01 22:44:39.616222] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:44.451 [2024-10-01 22:44:39.616237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:44.451 [2024-10-01 22:44:39.629182] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:44.451 [2024-10-01 22:44:39.629196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:44.451 [2024-10-01 22:44:39.644199] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:44.451 [2024-10-01 22:44:39.644215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:44.451 [2024-10-01 22:44:39.657480] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:44.451 [2024-10-01 22:44:39.657495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:44.451 [2024-10-01 22:44:39.672549] 
00:45:44.712 18785.80 IOPS, 146.76 MiB/s
00:45:44.712
00:45:44.712 Latency(us)
00:45:44.712 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:45:44.712 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:45:44.712 Nvme1n1 : 5.01 18788.50 146.79 0.00 0.00 6805.64 2621.44 11905.71
00:45:44.712 ===================================================================================================================
00:45:44.712 Total : 18788.50 146.79 0.00 0.00 6805.64 2621.44 11905.71
00:45:44.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (468935) - No such process
00:45:44.973 22:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 468935
00:45:44.973 22:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:45:44.973 22:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:45:44.973 22:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:45:44.973 22:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:45:44.973 22:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:45:44.973 22:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:45:44.973 22:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:45:44.973 delay0
00:45:44.973 22:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:45:44.973 22:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:45:44.973 22:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:45:44.973 22:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:45:44.973 22:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:45:44.973 22:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
[2024-10-01 22:44:40.174004] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:45:51.555 Initializing NVMe Controllers
00:45:51.555 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:45:51.555 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:45:51.555 Initialization complete. Launching workers.
00:45:51.555 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 290, failed: 8916 00:45:51.555 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 9138, failed to submit 68 00:45:51.555 success 8916, unsuccessful 222, failed 0 00:45:51.555 22:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:45:51.555 22:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:45:51.555 22:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:45:51.555 22:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:45:51.555 22:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:51.555 22:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:45:51.555 22:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:51.555 22:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:51.555 rmmod nvme_tcp 00:45:51.555 rmmod nvme_fabrics 00:45:51.555 rmmod nvme_keyring 00:45:51.555 22:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:45:51.555 22:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:45:51.555 22:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:45:51.555 22:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 466589 ']' 00:45:51.555 22:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 466589 00:45:51.555 22:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 466589 ']' 00:45:51.555 22:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 466589 00:45:51.555 22:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:45:51.555 22:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:45:51.555 22:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 466589 00:45:51.555 22:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:45:51.555 22:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:45:51.555 22:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 466589' 00:45:51.555 killing process with pid 466589 00:45:51.555 22:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 466589 00:45:51.555 22:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 466589 00:45:51.815 22:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:45:51.815 22:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:45:51.815 22:44:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:45:51.815 22:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:45:51.815 22:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:45:51.815 22:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:45:51.815 22:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:45:51.815 22:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:51.815 22:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:45:51.815 22:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:51.815 22:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:51.815 22:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:54.357 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:45:54.357 00:45:54.357 real 0m33.933s 00:45:54.357 user 0m43.587s 00:45:54.357 sys 0m12.085s 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:45:54.357 ************************************ 00:45:54.357 END TEST nvmf_zcopy 00:45:54.357 ************************************ 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:45:54.357 ************************************ 00:45:54.357 START TEST nvmf_nmic 00:45:54.357 ************************************ 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:45:54.357 * Looking for test storage... 
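For orientation, the nvmftestfini teardown that closed the zcopy test above boils down to the following, with the module-unload retry loop and helper indirection flattened. The $nvmfpid variable stands in for the literal pid 466589 seen in the trace, and the guess that _remove_spdk_ns deletes the test namespace is mine:

  sync
  modprobe -v -r nvme-tcp            # retried up to 20 times in the helper; the rmmod
  modprobe -v -r nvme-fabrics        # lines above show nvme_keyring going with them
  kill "$nvmfpid" && wait "$nvmfpid" # killprocess: stop the nvmf_tgt app
  # strip only the test firewall rules, recognizable by their SPDK_NVMF comment
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk    # _remove_spdk_ns (assumed)
  ip -4 addr flush cvl_0_1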
00:45:54.357 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:45:54.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:54.357 --rc genhtml_branch_coverage=1 00:45:54.357 --rc genhtml_function_coverage=1 00:45:54.357 --rc genhtml_legend=1 00:45:54.357 --rc geninfo_all_blocks=1 00:45:54.357 --rc geninfo_unexecuted_blocks=1 00:45:54.357 00:45:54.357 ' 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:45:54.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:54.357 --rc genhtml_branch_coverage=1 00:45:54.357 --rc genhtml_function_coverage=1 00:45:54.357 --rc genhtml_legend=1 00:45:54.357 --rc geninfo_all_blocks=1 00:45:54.357 --rc geninfo_unexecuted_blocks=1 00:45:54.357 00:45:54.357 ' 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:45:54.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:54.357 --rc genhtml_branch_coverage=1 00:45:54.357 --rc genhtml_function_coverage=1 00:45:54.357 --rc genhtml_legend=1 00:45:54.357 --rc geninfo_all_blocks=1 00:45:54.357 --rc geninfo_unexecuted_blocks=1 00:45:54.357 00:45:54.357 ' 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:45:54.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:54.357 --rc genhtml_branch_coverage=1 00:45:54.357 --rc genhtml_function_coverage=1 00:45:54.357 --rc genhtml_legend=1 00:45:54.357 --rc geninfo_all_blocks=1 00:45:54.357 --rc geninfo_unexecuted_blocks=1 00:45:54.357 00:45:54.357 ' 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:54.357 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:54.358 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:54.358 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:54.358 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:54.358 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:54.358 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:54.358 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:45:54.358 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:45:54.358 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:54.358 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:54.358 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:54.358 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:54.358 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:54.358 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:45:54.358 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:54.358 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:54.358 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:54.358 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:54.358 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:54.358 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:54.358 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:45:54.358 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:54.358 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:45:54.358 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:54.358 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:54.358 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:54.358 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:54.358 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:54.358 22:44:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:45:54.358 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:45:54.358 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:54.358 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:54.358 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:54.358 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:45:54.358 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:45:54.358 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:45:54.358 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:45:54.358 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:54.358 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:45:54.358 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:45:54.358 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:45:54.358 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:54.358 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:54.358 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:54.358 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:45:54.358 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:45:54.358 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:45:54.358 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:46:00.947 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:46:00.947 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:46:00.947 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:46:00.947 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:46:00.947 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:46:00.947 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:46:00.947 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:46:00.947 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:46:00.947 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:46:00.947 22:44:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:46:00.947 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:46:00.947 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:46:00.947 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:46:00.947 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:46:00.947 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:46:00.947 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:46:00.947 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:46:00.947 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:46:00.947 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:46:00.947 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:46:00.947 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:46:00.947 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:46:00.947 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:46:00.947 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:46:00.947 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:46:00.948 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:46:00.948 22:44:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:46:00.948 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:46:00.948 Found net devices under 0000:4b:00.0: cvl_0_0 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:00.948 
22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:46:00.948 Found net devices under 0000:4b:00.1: cvl_0_1 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:46:00.948 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:46:01.207 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:46:01.207 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
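Condensed, the nvmf_tcp_init sequence above (together with the link-up, firewall, and ping steps that follow below) builds the test topology: the target port cvl_0_0 moves into its own namespace as 10.0.0.2 while the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1, presumably so NVMe/TCP traffic between the two ports crosses the physical link rather than loopback. Every command here appears in the trace:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target side leaves the root ns
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment \
      --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                                  # root ns -> target ns sanity check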
00:46:01.207 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:46:01.207 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:46:01.207 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:46:01.207 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:46:01.207 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:46:01.207 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:46:01.207 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:46:01.207 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:46:01.207 00:46:01.207 --- 10.0.0.2 ping statistics --- 00:46:01.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:01.207 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:46:01.207 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:46:01.207 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:46:01.207 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:46:01.207 00:46:01.207 --- 10.0.0.1 ping statistics --- 00:46:01.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:01.207 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:46:01.207 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:46:01.207 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:46:01.207 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:46:01.207 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:46:01.207 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:46:01.207 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:46:01.207 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:46:01.207 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:46:01.207 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:46:01.466 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:46:01.466 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:46:01.466 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:46:01.466 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:46:01.466 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=475279 00:46:01.466 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@508 -- # waitforlisten 475279 00:46:01.466 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:46:01.466 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 475279 ']' 00:46:01.466 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:01.466 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:46:01.466 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:01.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:01.466 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:46:01.466 22:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:46:01.466 [2024-10-01 22:44:56.551156] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:46:01.466 [2024-10-01 22:44:56.552161] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:46:01.466 [2024-10-01 22:44:56.552200] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:01.466 [2024-10-01 22:44:56.620141] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:46:01.466 [2024-10-01 22:44:56.689223] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:01.466 [2024-10-01 22:44:56.689262] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:01.466 [2024-10-01 22:44:56.689270] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:01.466 [2024-10-01 22:44:56.689277] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:01.466 [2024-10-01 22:44:56.689283] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:46:01.466 [2024-10-01 22:44:56.689426] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:46:01.466 [2024-10-01 22:44:56.689543] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:46:01.466 [2024-10-01 22:44:56.689700] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:46:01.466 [2024-10-01 22:44:56.689700] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:46:01.725 [2024-10-01 22:44:56.792441] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:46:01.725 [2024-10-01 22:44:56.792501] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:46:01.725 [2024-10-01 22:44:56.793377] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:46:01.725 [2024-10-01 22:44:56.794271] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:46:01.725 [2024-10-01 22:44:56.794348] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:46:02.293 22:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:46:02.293 22:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:46:02.293 22:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:46:02.293 22:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:46:02.293 22:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:46:02.293 22:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:02.293 22:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:46:02.293 22:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:02.293 22:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:46:02.293 [2024-10-01 22:44:57.390130] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:02.293 22:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:02.293 22:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:46:02.293 22:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:02.293 22:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:46:02.293 Malloc0 00:46:02.293 22:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:02.293 22:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:46:02.293 22:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:02.293 22:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:46:02.293 22:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:02.293 22:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:46:02.293 22:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:02.293 22:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:46:02.293 22:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:02.293 22:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
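With the xtrace noise stripped, target provisioning for the nmic test reduces to five RPCs; the 64 MiB / 512 B malloc geometry comes from MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE in the trace, and the $rpc alias is again shorthand for this workspace's rpc.py (rpc_cmd in the trace wraps the same tool):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192      # TCP transport, 8 KiB I/O unit size
  $rpc bdev_malloc_create 64 512 -b Malloc0         # 64 MiB RAM bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The negative test case that follows then tries to add the same Malloc0 to a second subsystem (cnode2) and expects the "already claimed: type exclusive_write" rejection shown in the JSON-RPC error response.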
00:46:02.293 22:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:02.293 22:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:46:02.293 [2024-10-01 22:44:57.442292] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:02.293 22:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:02.293 22:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:46:02.293 test case1: single bdev can't be used in multiple subsystems 00:46:02.293 22:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:46:02.293 22:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:02.293 22:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:46:02.293 22:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:02.293 22:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:46:02.293 22:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:02.293 22:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:46:02.294 22:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:02.294 22:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:46:02.294 22:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:46:02.294 22:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:02.294 22:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:46:02.294 [2024-10-01 22:44:57.478064] bdev.c:8241:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:46:02.294 [2024-10-01 22:44:57.478084] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:46:02.294 [2024-10-01 22:44:57.478092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:02.294 request: 00:46:02.294 { 00:46:02.294 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:46:02.294 "namespace": { 00:46:02.294 "bdev_name": "Malloc0", 00:46:02.294 "no_auto_visible": false 00:46:02.294 }, 00:46:02.294 "method": "nvmf_subsystem_add_ns", 00:46:02.294 "req_id": 1 00:46:02.294 } 00:46:02.294 Got JSON-RPC error response 00:46:02.294 response: 00:46:02.294 { 00:46:02.294 "code": -32602, 00:46:02.294 "message": "Invalid parameters" 00:46:02.294 } 00:46:02.294 22:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:46:02.294 22:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:46:02.294 22:44:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:46:02.294 22:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:46:02.294 Adding namespace failed - expected result. 00:46:02.294 22:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:46:02.294 test case2: host connect to nvmf target in multiple paths 00:46:02.294 22:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:46:02.294 22:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:02.294 22:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:46:02.294 [2024-10-01 22:44:57.490168] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:46:02.294 22:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:02.294 22:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:46:02.611 22:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:46:03.179 22:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:46:03.179 22:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:46:03.179 22:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:46:03.179 22:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:46:03.179 22:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:46:05.086 22:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:46:05.086 22:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:46:05.086 22:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:46:05.086 22:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:46:05.086 22:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:46:05.086 22:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:46:05.086 22:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:46:05.086 [global] 00:46:05.086 thread=1 00:46:05.086 invalidate=1 
00:46:05.086 rw=write 00:46:05.087 time_based=1 00:46:05.087 runtime=1 00:46:05.087 ioengine=libaio 00:46:05.087 direct=1 00:46:05.087 bs=4096 00:46:05.087 iodepth=1 00:46:05.087 norandommap=0 00:46:05.087 numjobs=1 00:46:05.087 00:46:05.087 verify_dump=1 00:46:05.087 verify_backlog=512 00:46:05.087 verify_state_save=0 00:46:05.087 do_verify=1 00:46:05.087 verify=crc32c-intel 00:46:05.087 [job0] 00:46:05.087 filename=/dev/nvme0n1 00:46:05.087 Could not set queue depth (nvme0n1) 00:46:05.349 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:46:05.349 fio-3.35 00:46:05.349 Starting 1 thread 00:46:06.733 00:46:06.733 job0: (groupid=0, jobs=1): err= 0: pid=476237: Tue Oct 1 22:45:01 2024 00:46:06.733 read: IOPS=114, BW=458KiB/s (469kB/s)(460KiB/1005msec) 00:46:06.733 slat (nsec): min=8212, max=41082, avg=25938.86, stdev=2232.60 00:46:06.733 clat (usec): min=734, max=43004, avg=5637.49, stdev=13065.93 00:46:06.733 lat (usec): min=759, max=43032, avg=5663.43, stdev=13066.28 00:46:06.733 clat percentiles (usec): 00:46:06.733 | 1.00th=[ 734], 5.00th=[ 799], 10.00th=[ 865], 20.00th=[ 914], 00:46:06.733 | 30.00th=[ 979], 40.00th=[ 1020], 50.00th=[ 1037], 60.00th=[ 1057], 00:46:06.733 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[41681], 95.00th=[42206], 00:46:06.733 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:46:06.733 | 99.99th=[43254] 00:46:06.733 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:46:06.733 slat (usec): min=9, max=27662, avg=84.80, stdev=1221.18 00:46:06.733 clat (usec): min=213, max=982, avg=588.98, stdev=101.32 00:46:06.733 lat (usec): min=223, max=28180, avg=673.78, stdev=1222.67 00:46:06.733 clat percentiles (usec): 00:46:06.733 | 1.00th=[ 334], 5.00th=[ 404], 10.00th=[ 449], 20.00th=[ 515], 00:46:06.733 | 30.00th=[ 553], 40.00th=[ 570], 50.00th=[ 594], 60.00th=[ 619], 00:46:06.733 | 70.00th=[ 644], 80.00th=[ 676], 90.00th=[ 709], 95.00th=[ 734], 00:46:06.733 | 99.00th=[ 791], 99.50th=[ 840], 99.90th=[ 979], 99.95th=[ 979], 00:46:06.733 | 99.99th=[ 979] 00:46:06.733 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:46:06.733 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:46:06.733 lat (usec) : 250=0.32%, 500=13.08%, 750=66.35%, 1000=8.13% 00:46:06.733 lat (msec) : 2=10.05%, 50=2.07% 00:46:06.733 cpu : usr=1.29%, sys=1.49%, ctx=632, majf=0, minf=1 00:46:06.733 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:06.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:06.733 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:06.733 issued rwts: total=115,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:06.733 latency : target=0, window=0, percentile=100.00%, depth=1 00:46:06.733 00:46:06.733 Run status group 0 (all jobs): 00:46:06.733 READ: bw=458KiB/s (469kB/s), 458KiB/s-458KiB/s (469kB/s-469kB/s), io=460KiB (471kB), run=1005-1005msec 00:46:06.733 WRITE: bw=2038KiB/s (2087kB/s), 2038KiB/s-2038KiB/s (2087kB/s-2087kB/s), io=2048KiB (2097kB), run=1005-1005msec 00:46:06.733 00:46:06.733 Disk stats (read/write): 00:46:06.733 nvme0n1: ios=164/512, merge=0/0, ticks=1113/285, in_queue=1398, util=98.60% 00:46:06.733 22:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:46:06.733 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:46:06.733 22:45:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:46:06.733 22:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:46:06.733 22:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:46:06.733 22:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:46:06.733 22:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:46:06.733 22:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:46:06.733 22:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:46:06.733 22:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:46:06.733 22:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:46:06.733 22:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:46:06.733 22:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:46:06.733 22:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:46:06.733 22:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:46:06.733 22:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:46:06.733 22:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:46:06.733 rmmod nvme_tcp 00:46:06.733 rmmod nvme_fabrics 00:46:06.992 rmmod nvme_keyring 00:46:06.993 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:46:06.993 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:46:06.993 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:46:06.993 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 475279 ']' 00:46:06.993 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 475279 00:46:06.993 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 475279 ']' 00:46:06.993 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 475279 00:46:06.993 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:46:06.993 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:46:06.993 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 475279 00:46:06.993 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:46:06.993 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:46:06.993 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 475279' 00:46:06.993 killing process with pid 475279 00:46:06.993 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 475279 00:46:06.993 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 475279 00:46:07.253 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:46:07.253 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:46:07.253 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:46:07.253 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:46:07.253 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:46:07.253 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:46:07.253 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:46:07.253 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:46:07.253 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:46:07.253 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:07.253 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:46:07.253 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:09.226 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:46:09.226 00:46:09.226 real 0m15.309s 00:46:09.226 user 0m35.480s 00:46:09.226 sys 0m7.207s 00:46:09.226 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:46:09.226 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:46:09.226 ************************************ 00:46:09.226 END TEST nvmf_nmic 00:46:09.226 ************************************ 00:46:09.226 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:46:09.226 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:46:09.226 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:46:09.226 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:46:09.226 ************************************ 00:46:09.226 START TEST nvmf_fio_target 00:46:09.226 ************************************ 00:46:09.226 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:46:09.488 * Looking for test storage... 
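Both the nmic run that just ended and the fio_target test starting here drive their I/O through the repo's scripts/fio-wrapper, which generates per-device job files like the one printed verbatim above (rw=write, bs=4096, iodepth=1, verify=crc32c-intel). As a rough standalone equivalent of that single-job write-and-verify workload — a sketch, not the wrapper's exact command line:

fio --name=job0 --filename=/dev/nvme0n1 \
    --rw=write --bs=4096 --iodepth=1 --numjobs=1 \
    --ioengine=libaio --direct=1 \
    --time_based=1 --runtime=1 \
    --do_verify=1 --verify=crc32c-intel \
    --verify_backlog=512 --verify_dump=1    # read back and checksum every written block

The crc32c-intel verifier is what makes these short runs data-integrity checks rather than performance measurements: each written block is read back and its CRC compared, so the throughput numbers in the reports are incidental.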
00:46:09.488 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:46:09.488 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:46:09.488 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:46:09.488 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:46:09.488 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:46:09.488 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:09.488 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:09.488 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:09.488 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:46:09.488 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:46:09.488 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:46:09.488 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:46:09.488 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:46:09.488 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:46:09.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:09.489 --rc genhtml_branch_coverage=1 00:46:09.489 --rc genhtml_function_coverage=1 00:46:09.489 --rc genhtml_legend=1 00:46:09.489 --rc geninfo_all_blocks=1 00:46:09.489 --rc geninfo_unexecuted_blocks=1 00:46:09.489 00:46:09.489 ' 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:46:09.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:09.489 --rc genhtml_branch_coverage=1 00:46:09.489 --rc genhtml_function_coverage=1 00:46:09.489 --rc genhtml_legend=1 00:46:09.489 --rc geninfo_all_blocks=1 00:46:09.489 --rc geninfo_unexecuted_blocks=1 00:46:09.489 00:46:09.489 ' 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:46:09.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:09.489 --rc genhtml_branch_coverage=1 00:46:09.489 --rc genhtml_function_coverage=1 00:46:09.489 --rc genhtml_legend=1 00:46:09.489 --rc geninfo_all_blocks=1 00:46:09.489 --rc geninfo_unexecuted_blocks=1 00:46:09.489 00:46:09.489 ' 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:46:09.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:09.489 --rc genhtml_branch_coverage=1 00:46:09.489 --rc genhtml_function_coverage=1 00:46:09.489 --rc genhtml_legend=1 00:46:09.489 --rc geninfo_all_blocks=1 00:46:09.489 --rc geninfo_unexecuted_blocks=1 00:46:09.489 
00:46:09.489 ' 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:46:09.489 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:46:09.490 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:09.490 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:46:09.490 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:09.490 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:46:09.490 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:46:09.490 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:46:09.490 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:46:17.802 22:45:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:46:17.802 22:45:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:46:17.802 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:46:17.802 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:46:17.802 Found net 
devices under 0000:4b:00.0: cvl_0_0 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:46:17.802 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:17.803 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:46:17.803 Found net devices under 0000:4b:00.1: cvl_0_1 00:46:17.803 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:46:17.803 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:46:17.803 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:46:17.803 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:46:17.803 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:46:17.803 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:46:17.803 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:46:17.803 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:46:17.803 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:46:17.803 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:46:17.803 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:46:17.803 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:46:17.803 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:46:17.803 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:46:17.803 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:46:17.803 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:46:17.803 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:46:17.803 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:46:17.803 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:46:17.803 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:46:17.803 22:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:46:17.803 22:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:46:17.803 22:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:46:17.803 22:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:46:17.803 22:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:46:17.803 22:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:46:17.803 22:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:46:17.803 22:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:46:17.803 22:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:46:17.803 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:46:17.803 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:46:17.803 00:46:17.803 --- 10.0.0.2 ping statistics --- 00:46:17.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:17.803 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:46:17.803 22:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:46:17.803 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:46:17.803 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:46:17.803 00:46:17.803 --- 10.0.0.1 ping statistics --- 00:46:17.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:17.803 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:46:17.803 22:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:46:17.803 22:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:46:17.803 22:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:46:17.803 22:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:46:17.803 22:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:46:17.803 22:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:46:17.803 22:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:46:17.803 22:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:46:17.803 22:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:46:17.803 22:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:46:17.803 22:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:46:17.803 22:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:46:17.803 22:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:46:17.803 22:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=481381 00:46:17.803 22:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 481381 00:46:17.803 22:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:46:17.803 22:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 481381 ']' 00:46:17.803 22:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:17.803 22:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:46:17.803 22:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:17.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
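The plumbing above moves the target-side port (cvl_0_0) into its own network namespace so that initiator and target traffic crosses a real link between the two ice/E810 ports discovered earlier. Condensed from the trace, with interface names and addresses exactly as this run used them (the iptables comment-match is omitted for brevity):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port
ping -c 1 10.0.0.2                                   # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator

The 'ip netns exec cvl_0_0_ns_spdk' prefix is held in NVMF_TARGET_NS_CMD and prepended to NVMF_APP, which is why the nvmf_tgt launch below runs inside the namespace while the nvme-cli initiator commands run in the root namespace.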
00:46:17.803 22:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:46:17.803 22:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:46:17.803 [2024-10-01 22:45:12.392348] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:46:17.803 [2024-10-01 22:45:12.393459] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:46:17.803 [2024-10-01 22:45:12.393513] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:17.803 [2024-10-01 22:45:12.466290] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:46:17.803 [2024-10-01 22:45:12.540452] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:17.803 [2024-10-01 22:45:12.540494] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:17.803 [2024-10-01 22:45:12.540502] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:17.803 [2024-10-01 22:45:12.540509] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:17.803 [2024-10-01 22:45:12.540515] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:46:17.803 [2024-10-01 22:45:12.540663] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:46:17.803 [2024-10-01 22:45:12.540900] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:46:17.803 [2024-10-01 22:45:12.540738] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:46:17.803 [2024-10-01 22:45:12.540901] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:46:17.803 [2024-10-01 22:45:12.647073] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:46:17.803 [2024-10-01 22:45:12.647337] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:46:17.803 [2024-10-01 22:45:12.648235] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:46:17.803 [2024-10-01 22:45:12.648602] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:46:17.803 [2024-10-01 22:45:12.648740] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
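With the target up in interrupt mode (one reactor per core of the 0xF mask, each poll-group thread switched to intr mode per the notices above), the remaining setup is plain JSON-RPC plus one nvme-cli connect. Stripped of the absolute script paths and timestamps, the sequence that follows in the trace amounts to:

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512                      # Malloc0; repeated for Malloc1..Malloc6
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 \
    --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204

The connect exposes four namespaces on the initiator (two plain malloc bdevs, the raid0 stripe, and the concat0 volume), which is why waitforserial expects 4 devices and the multi-threaded fio run further down targets /dev/nvme0n1 through /dev/nvme0n4.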
00:46:18.064 22:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:46:18.064 22:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:46:18.064 22:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:46:18.064 22:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:46:18.064 22:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:46:18.064 22:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:18.064 22:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:46:18.324 [2024-10-01 22:45:13.377697] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:18.324 22:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:46:18.584 22:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:46:18.584 22:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:46:18.584 22:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:46:18.584 22:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:46:18.844 22:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:46:18.844 22:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:46:19.102 22:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:46:19.103 22:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:46:19.103 22:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:46:19.362 22:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:46:19.362 22:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:46:19.622 22:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:46:19.622 22:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:46:19.622 22:45:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:46:19.622 22:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:46:19.881 22:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:46:20.141 22:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:46:20.141 22:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:46:20.141 22:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:46:20.141 22:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:46:20.401 22:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:46:20.401 [2024-10-01 22:45:15.653504] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:20.661 22:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:46:20.661 22:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:46:20.922 22:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:46:21.183 22:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:46:21.183 22:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:46:21.183 22:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:46:21.183 22:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:46:21.183 22:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:46:21.183 22:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:46:23.726 22:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:46:23.726 22:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o 
NAME,SERIAL 00:46:23.726 22:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:46:23.726 22:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:46:23.726 22:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:46:23.726 22:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:46:23.726 22:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:46:23.726 [global] 00:46:23.726 thread=1 00:46:23.726 invalidate=1 00:46:23.726 rw=write 00:46:23.726 time_based=1 00:46:23.726 runtime=1 00:46:23.726 ioengine=libaio 00:46:23.726 direct=1 00:46:23.726 bs=4096 00:46:23.726 iodepth=1 00:46:23.726 norandommap=0 00:46:23.726 numjobs=1 00:46:23.726 00:46:23.726 verify_dump=1 00:46:23.726 verify_backlog=512 00:46:23.726 verify_state_save=0 00:46:23.726 do_verify=1 00:46:23.726 verify=crc32c-intel 00:46:23.726 [job0] 00:46:23.726 filename=/dev/nvme0n1 00:46:23.726 [job1] 00:46:23.726 filename=/dev/nvme0n2 00:46:23.726 [job2] 00:46:23.726 filename=/dev/nvme0n3 00:46:23.726 [job3] 00:46:23.726 filename=/dev/nvme0n4 00:46:23.726 Could not set queue depth (nvme0n1) 00:46:23.726 Could not set queue depth (nvme0n2) 00:46:23.726 Could not set queue depth (nvme0n3) 00:46:23.726 Could not set queue depth (nvme0n4) 00:46:23.726 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:46:23.726 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:46:23.726 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:46:23.726 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:46:23.726 fio-3.35 00:46:23.726 Starting 4 threads 00:46:25.112 00:46:25.112 job0: (groupid=0, jobs=1): err= 0: pid=482663: Tue Oct 1 22:45:20 2024 00:46:25.112 read: IOPS=19, BW=78.9KiB/s (80.8kB/s)(80.0KiB/1014msec) 00:46:25.112 slat (nsec): min=25795, max=29857, avg=26501.20, stdev=996.07 00:46:25.112 clat (usec): min=928, max=42136, avg=37617.37, stdev=12537.49 00:46:25.112 lat (usec): min=955, max=42162, avg=37643.88, stdev=12536.80 00:46:25.112 clat percentiles (usec): 00:46:25.112 | 1.00th=[ 930], 5.00th=[ 930], 10.00th=[ 1029], 20.00th=[40633], 00:46:25.112 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:46:25.112 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:46:25.112 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:46:25.112 | 99.99th=[42206] 00:46:25.112 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:46:25.112 slat (usec): min=3, max=42391, avg=109.46, stdev=1872.33 00:46:25.112 clat (usec): min=103, max=734, avg=391.61, stdev=111.16 00:46:25.112 lat (usec): min=108, max=42905, avg=501.06, stdev=1881.43 00:46:25.112 clat percentiles (usec): 00:46:25.112 | 1.00th=[ 178], 5.00th=[ 229], 10.00th=[ 262], 20.00th=[ 302], 00:46:25.112 | 30.00th=[ 322], 40.00th=[ 338], 50.00th=[ 375], 60.00th=[ 416], 00:46:25.112 | 70.00th=[ 453], 80.00th=[ 498], 90.00th=[ 545], 95.00th=[ 586], 00:46:25.112 | 
99.00th=[ 660], 99.50th=[ 676], 99.90th=[ 734], 99.95th=[ 734], 00:46:25.112 | 99.99th=[ 734] 00:46:25.112 bw ( KiB/s): min= 4096, max= 4096, per=36.10%, avg=4096.00, stdev= 0.00, samples=1 00:46:25.112 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:46:25.112 lat (usec) : 250=7.89%, 500=69.92%, 750=18.42%, 1000=0.19% 00:46:25.112 lat (msec) : 2=0.19%, 50=3.38% 00:46:25.112 cpu : usr=1.09%, sys=1.38%, ctx=539, majf=0, minf=1 00:46:25.112 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:25.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:25.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:25.112 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:25.112 latency : target=0, window=0, percentile=100.00%, depth=1 00:46:25.112 job1: (groupid=0, jobs=1): err= 0: pid=482677: Tue Oct 1 22:45:20 2024 00:46:25.112 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:46:25.112 slat (nsec): min=7137, max=47745, avg=27958.92, stdev=2509.96 00:46:25.112 clat (usec): min=746, max=1176, avg=956.52, stdev=57.08 00:46:25.112 lat (usec): min=774, max=1221, avg=984.48, stdev=57.32 00:46:25.112 clat percentiles (usec): 00:46:25.112 | 1.00th=[ 807], 5.00th=[ 857], 10.00th=[ 889], 20.00th=[ 914], 00:46:25.112 | 30.00th=[ 930], 40.00th=[ 947], 50.00th=[ 955], 60.00th=[ 971], 00:46:25.112 | 70.00th=[ 979], 80.00th=[ 996], 90.00th=[ 1029], 95.00th=[ 1045], 00:46:25.112 | 99.00th=[ 1106], 99.50th=[ 1139], 99.90th=[ 1172], 99.95th=[ 1172], 00:46:25.112 | 99.99th=[ 1172] 00:46:25.112 write: IOPS=827, BW=3309KiB/s (3388kB/s)(3312KiB/1001msec); 0 zone resets 00:46:25.112 slat (usec): min=8, max=808, avg=32.07, stdev=29.92 00:46:25.112 clat (usec): min=119, max=1082, avg=552.79, stdev=125.30 00:46:25.112 lat (usec): min=160, max=1707, avg=584.86, stdev=134.23 00:46:25.112 clat percentiles (usec): 00:46:25.112 | 1.00th=[ 277], 5.00th=[ 330], 10.00th=[ 379], 20.00th=[ 449], 00:46:25.112 | 30.00th=[ 490], 40.00th=[ 529], 50.00th=[ 553], 60.00th=[ 586], 00:46:25.112 | 70.00th=[ 627], 80.00th=[ 660], 90.00th=[ 709], 95.00th=[ 742], 00:46:25.112 | 99.00th=[ 832], 99.50th=[ 889], 99.90th=[ 1090], 99.95th=[ 1090], 00:46:25.112 | 99.99th=[ 1090] 00:46:25.112 bw ( KiB/s): min= 4096, max= 4096, per=36.10%, avg=4096.00, stdev= 0.00, samples=1 00:46:25.112 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:46:25.112 lat (usec) : 250=0.30%, 500=19.48%, 750=39.70%, 1000=33.21% 00:46:25.112 lat (msec) : 2=7.31% 00:46:25.112 cpu : usr=2.50%, sys=5.50%, ctx=1344, majf=0, minf=1 00:46:25.112 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:25.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:25.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:25.112 issued rwts: total=512,828,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:25.112 latency : target=0, window=0, percentile=100.00%, depth=1 00:46:25.112 job2: (groupid=0, jobs=1): err= 0: pid=482700: Tue Oct 1 22:45:20 2024 00:46:25.112 read: IOPS=20, BW=83.2KiB/s (85.2kB/s)(84.0KiB/1009msec) 00:46:25.112 slat (nsec): min=27604, max=29539, avg=28069.10, stdev=468.90 00:46:25.112 clat (usec): min=875, max=41914, avg=37253.69, stdev=12086.84 00:46:25.112 lat (usec): min=904, max=41942, avg=37281.76, stdev=12086.60 00:46:25.112 clat percentiles (usec): 00:46:25.112 | 1.00th=[ 873], 5.00th=[ 938], 10.00th=[40633], 20.00th=[41157], 00:46:25.112 | 
30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:46:25.112 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:46:25.112 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:46:25.112 | 99.99th=[41681] 00:46:25.112 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:46:25.112 slat (nsec): min=3478, max=79298, avg=20870.64, stdev=13236.49 00:46:25.112 clat (usec): min=114, max=2661, avg=411.75, stdev=142.62 00:46:25.112 lat (usec): min=125, max=2672, avg=432.62, stdev=147.34 00:46:25.112 clat percentiles (usec): 00:46:25.112 | 1.00th=[ 208], 5.00th=[ 277], 10.00th=[ 297], 20.00th=[ 326], 00:46:25.112 | 30.00th=[ 347], 40.00th=[ 367], 50.00th=[ 383], 60.00th=[ 441], 00:46:25.112 | 70.00th=[ 474], 80.00th=[ 494], 90.00th=[ 519], 95.00th=[ 545], 00:46:25.112 | 99.00th=[ 693], 99.50th=[ 750], 99.90th=[ 2671], 99.95th=[ 2671], 00:46:25.112 | 99.99th=[ 2671] 00:46:25.112 bw ( KiB/s): min= 4096, max= 4096, per=36.10%, avg=4096.00, stdev= 0.00, samples=1 00:46:25.112 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:46:25.112 lat (usec) : 250=2.06%, 500=78.24%, 750=15.20%, 1000=0.56% 00:46:25.112 lat (msec) : 2=0.19%, 4=0.19%, 50=3.56% 00:46:25.112 cpu : usr=0.60%, sys=0.99%, ctx=535, majf=0, minf=1 00:46:25.112 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:25.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:25.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:25.112 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:25.112 latency : target=0, window=0, percentile=100.00%, depth=1 00:46:25.112 job3: (groupid=0, jobs=1): err= 0: pid=482708: Tue Oct 1 22:45:20 2024 00:46:25.112 read: IOPS=645, BW=2581KiB/s (2643kB/s)(2584KiB/1001msec) 00:46:25.112 slat (nsec): min=7507, max=61167, avg=23143.23, stdev=8170.57 00:46:25.112 clat (usec): min=516, max=1429, avg=781.12, stdev=71.67 00:46:25.112 lat (usec): min=524, max=1456, avg=804.26, stdev=73.99 00:46:25.112 clat percentiles (usec): 00:46:25.112 | 1.00th=[ 611], 5.00th=[ 660], 10.00th=[ 685], 20.00th=[ 725], 00:46:25.112 | 30.00th=[ 758], 40.00th=[ 775], 50.00th=[ 791], 60.00th=[ 799], 00:46:25.112 | 70.00th=[ 816], 80.00th=[ 832], 90.00th=[ 857], 95.00th=[ 881], 00:46:25.112 | 99.00th=[ 930], 99.50th=[ 963], 99.90th=[ 1434], 99.95th=[ 1434], 00:46:25.112 | 99.99th=[ 1434] 00:46:25.112 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:46:25.112 slat (nsec): min=3951, max=63715, avg=28635.93, stdev=11220.93 00:46:25.112 clat (usec): min=220, max=765, avg=428.67, stdev=78.40 00:46:25.112 lat (usec): min=233, max=769, avg=457.31, stdev=83.19 00:46:25.112 clat percentiles (usec): 00:46:25.112 | 1.00th=[ 235], 5.00th=[ 289], 10.00th=[ 330], 20.00th=[ 355], 00:46:25.112 | 30.00th=[ 392], 40.00th=[ 429], 50.00th=[ 441], 60.00th=[ 457], 00:46:25.112 | 70.00th=[ 469], 80.00th=[ 486], 90.00th=[ 515], 95.00th=[ 545], 00:46:25.112 | 99.00th=[ 611], 99.50th=[ 644], 99.90th=[ 750], 99.95th=[ 766], 00:46:25.112 | 99.99th=[ 766] 00:46:25.112 bw ( KiB/s): min= 4096, max= 4096, per=36.10%, avg=4096.00, stdev= 0.00, samples=1 00:46:25.112 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:46:25.112 lat (usec) : 250=1.38%, 500=51.20%, 750=18.80%, 1000=28.50% 00:46:25.112 lat (msec) : 2=0.12% 00:46:25.112 cpu : usr=2.10%, sys=4.70%, ctx=1671, majf=0, minf=1 00:46:25.112 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:25.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:25.113 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:25.113 issued rwts: total=646,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:25.113 latency : target=0, window=0, percentile=100.00%, depth=1 00:46:25.113 00:46:25.113 Run status group 0 (all jobs): 00:46:25.113 READ: bw=4730KiB/s (4843kB/s), 78.9KiB/s-2581KiB/s (80.8kB/s-2643kB/s), io=4796KiB (4911kB), run=1001-1014msec 00:46:25.113 WRITE: bw=11.1MiB/s (11.6MB/s), 2020KiB/s-4092KiB/s (2068kB/s-4190kB/s), io=11.2MiB (11.8MB), run=1001-1014msec 00:46:25.113 00:46:25.113 Disk stats (read/write): 00:46:25.113 nvme0n1: ios=64/512, merge=0/0, ticks=703/154, in_queue=857, util=83.87% 00:46:25.113 nvme0n2: ios=564/550, merge=0/0, ticks=608/267, in_queue=875, util=91.32% 00:46:25.113 nvme0n3: ios=73/512, merge=0/0, ticks=993/205, in_queue=1198, util=91.75% 00:46:25.113 nvme0n4: ios=534/896, merge=0/0, ticks=1275/377, in_queue=1652, util=93.90% 00:46:25.113 22:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:46:25.113 [global] 00:46:25.113 thread=1 00:46:25.113 invalidate=1 00:46:25.113 rw=randwrite 00:46:25.113 time_based=1 00:46:25.113 runtime=1 00:46:25.113 ioengine=libaio 00:46:25.113 direct=1 00:46:25.113 bs=4096 00:46:25.113 iodepth=1 00:46:25.113 norandommap=0 00:46:25.113 numjobs=1 00:46:25.113 00:46:25.113 verify_dump=1 00:46:25.113 verify_backlog=512 00:46:25.113 verify_state_save=0 00:46:25.113 do_verify=1 00:46:25.113 verify=crc32c-intel 00:46:25.113 [job0] 00:46:25.113 filename=/dev/nvme0n1 00:46:25.113 [job1] 00:46:25.113 filename=/dev/nvme0n2 00:46:25.113 [job2] 00:46:25.113 filename=/dev/nvme0n3 00:46:25.113 [job3] 00:46:25.113 filename=/dev/nvme0n4 00:46:25.113 Could not set queue depth (nvme0n1) 00:46:25.113 Could not set queue depth (nvme0n2) 00:46:25.113 Could not set queue depth (nvme0n3) 00:46:25.113 Could not set queue depth (nvme0n4) 00:46:25.383 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:46:25.383 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:46:25.383 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:46:25.383 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:46:25.383 fio-3.35 00:46:25.383 Starting 4 threads 00:46:26.770 00:46:26.770 job0: (groupid=0, jobs=1): err= 0: pid=483170: Tue Oct 1 22:45:21 2024 00:46:26.770 read: IOPS=16, BW=67.3KiB/s (68.9kB/s)(68.0KiB/1011msec) 00:46:26.770 slat (nsec): min=7899, max=27273, avg=24840.12, stdev=4388.58 00:46:26.770 clat (usec): min=952, max=42130, avg=39416.88, stdev=9918.11 00:46:26.770 lat (usec): min=978, max=42155, avg=39441.72, stdev=9917.68 00:46:26.770 clat percentiles (usec): 00:46:26.770 | 1.00th=[ 955], 5.00th=[ 955], 10.00th=[40633], 20.00th=[41681], 00:46:26.770 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:46:26.770 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:46:26.770 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:46:26.770 | 99.99th=[42206] 00:46:26.770 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone 
resets 00:46:26.770 slat (nsec): min=9014, max=68027, avg=28770.20, stdev=8929.60 00:46:26.770 clat (usec): min=242, max=918, avg=627.36, stdev=121.88 00:46:26.770 lat (usec): min=252, max=949, avg=656.13, stdev=125.72 00:46:26.770 clat percentiles (usec): 00:46:26.770 | 1.00th=[ 343], 5.00th=[ 404], 10.00th=[ 469], 20.00th=[ 519], 00:46:26.770 | 30.00th=[ 570], 40.00th=[ 603], 50.00th=[ 635], 60.00th=[ 668], 00:46:26.770 | 70.00th=[ 701], 80.00th=[ 742], 90.00th=[ 783], 95.00th=[ 799], 00:46:26.770 | 99.00th=[ 857], 99.50th=[ 906], 99.90th=[ 922], 99.95th=[ 922], 00:46:26.770 | 99.99th=[ 922] 00:46:26.770 bw ( KiB/s): min= 4096, max= 4096, per=37.19%, avg=4096.00, stdev= 0.00, samples=1 00:46:26.770 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:46:26.770 lat (usec) : 250=0.19%, 500=16.07%, 750=65.22%, 1000=15.50% 00:46:26.770 lat (msec) : 50=3.02% 00:46:26.770 cpu : usr=0.79%, sys=2.18%, ctx=529, majf=0, minf=1 00:46:26.770 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:26.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:26.770 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:26.770 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:26.770 latency : target=0, window=0, percentile=100.00%, depth=1 00:46:26.770 job1: (groupid=0, jobs=1): err= 0: pid=483178: Tue Oct 1 22:45:21 2024 00:46:26.770 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:46:26.770 slat (nsec): min=7610, max=63063, avg=26909.54, stdev=3339.56 00:46:26.770 clat (usec): min=712, max=1856, avg=1010.85, stdev=106.20 00:46:26.770 lat (usec): min=739, max=1887, avg=1037.76, stdev=106.44 00:46:26.770 clat percentiles (usec): 00:46:26.770 | 1.00th=[ 783], 5.00th=[ 840], 10.00th=[ 881], 20.00th=[ 922], 00:46:26.770 | 30.00th=[ 963], 40.00th=[ 996], 50.00th=[ 1012], 60.00th=[ 1037], 00:46:26.770 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[ 1123], 95.00th=[ 1172], 00:46:26.770 | 99.00th=[ 1270], 99.50th=[ 1319], 99.90th=[ 1860], 99.95th=[ 1860], 00:46:26.770 | 99.99th=[ 1860] 00:46:26.770 write: IOPS=730, BW=2921KiB/s (2991kB/s)(2924KiB/1001msec); 0 zone resets 00:46:26.770 slat (nsec): min=8755, max=52600, avg=28700.64, stdev=9555.42 00:46:26.770 clat (usec): min=267, max=1393, avg=599.14, stdev=140.79 00:46:26.770 lat (usec): min=277, max=1431, avg=627.84, stdev=145.00 00:46:26.770 clat percentiles (usec): 00:46:26.770 | 1.00th=[ 289], 5.00th=[ 351], 10.00th=[ 408], 20.00th=[ 478], 00:46:26.770 | 30.00th=[ 529], 40.00th=[ 578], 50.00th=[ 611], 60.00th=[ 635], 00:46:26.770 | 70.00th=[ 668], 80.00th=[ 709], 90.00th=[ 766], 95.00th=[ 807], 00:46:26.770 | 99.00th=[ 873], 99.50th=[ 922], 99.90th=[ 1401], 99.95th=[ 1401], 00:46:26.770 | 99.99th=[ 1401] 00:46:26.770 bw ( KiB/s): min= 4096, max= 4096, per=37.19%, avg=4096.00, stdev= 0.00, samples=1 00:46:26.770 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:46:26.770 lat (usec) : 500=14.64%, 750=36.69%, 1000=24.54% 00:46:26.770 lat (msec) : 2=24.14% 00:46:26.770 cpu : usr=2.40%, sys=4.90%, ctx=1243, majf=0, minf=2 00:46:26.770 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:26.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:26.770 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:26.770 issued rwts: total=512,731,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:26.770 latency : target=0, window=0, percentile=100.00%, depth=1 00:46:26.770 
job2: (groupid=0, jobs=1): err= 0: pid=483187: Tue Oct 1 22:45:21 2024 00:46:26.770 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:46:26.770 slat (nsec): min=25772, max=59805, avg=27013.69, stdev=3055.99 00:46:26.770 clat (usec): min=944, max=1423, avg=1224.37, stdev=75.02 00:46:26.770 lat (usec): min=970, max=1449, avg=1251.38, stdev=74.92 00:46:26.770 clat percentiles (usec): 00:46:26.770 | 1.00th=[ 988], 5.00th=[ 1090], 10.00th=[ 1139], 20.00th=[ 1172], 00:46:26.770 | 30.00th=[ 1188], 40.00th=[ 1221], 50.00th=[ 1221], 60.00th=[ 1237], 00:46:26.770 | 70.00th=[ 1270], 80.00th=[ 1287], 90.00th=[ 1319], 95.00th=[ 1336], 00:46:26.770 | 99.00th=[ 1385], 99.50th=[ 1401], 99.90th=[ 1418], 99.95th=[ 1418], 00:46:26.770 | 99.99th=[ 1418] 00:46:26.770 write: IOPS=516, BW=2066KiB/s (2116kB/s)(2068KiB/1001msec); 0 zone resets 00:46:26.770 slat (nsec): min=10072, max=55243, avg=30508.19, stdev=8961.34 00:46:26.770 clat (usec): min=227, max=896, avg=647.16, stdev=121.15 00:46:26.770 lat (usec): min=238, max=925, avg=677.67, stdev=123.97 00:46:26.770 clat percentiles (usec): 00:46:26.770 | 1.00th=[ 363], 5.00th=[ 412], 10.00th=[ 478], 20.00th=[ 537], 00:46:26.770 | 30.00th=[ 603], 40.00th=[ 619], 50.00th=[ 652], 60.00th=[ 693], 00:46:26.770 | 70.00th=[ 734], 80.00th=[ 758], 90.00th=[ 791], 95.00th=[ 824], 00:46:26.770 | 99.00th=[ 873], 99.50th=[ 881], 99.90th=[ 898], 99.95th=[ 898], 00:46:26.770 | 99.99th=[ 898] 00:46:26.770 bw ( KiB/s): min= 4096, max= 4096, per=37.19%, avg=4096.00, stdev= 0.00, samples=1 00:46:26.770 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:46:26.770 lat (usec) : 250=0.10%, 500=7.00%, 750=32.17%, 1000=11.56% 00:46:26.770 lat (msec) : 2=49.17% 00:46:26.770 cpu : usr=1.70%, sys=2.90%, ctx=1031, majf=0, minf=1 00:46:26.770 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:26.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:26.770 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:26.770 issued rwts: total=512,517,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:26.770 latency : target=0, window=0, percentile=100.00%, depth=1 00:46:26.770 job3: (groupid=0, jobs=1): err= 0: pid=483191: Tue Oct 1 22:45:21 2024 00:46:26.770 read: IOPS=540, BW=2162KiB/s (2214kB/s)(2164KiB/1001msec) 00:46:26.770 slat (nsec): min=7493, max=55384, avg=25240.04, stdev=3329.83 00:46:26.770 clat (usec): min=498, max=1237, avg=906.51, stdev=128.66 00:46:26.770 lat (usec): min=523, max=1262, avg=931.75, stdev=128.65 00:46:26.770 clat percentiles (usec): 00:46:26.770 | 1.00th=[ 570], 5.00th=[ 668], 10.00th=[ 742], 20.00th=[ 807], 00:46:26.770 | 30.00th=[ 840], 40.00th=[ 889], 50.00th=[ 930], 60.00th=[ 963], 00:46:26.770 | 70.00th=[ 979], 80.00th=[ 1012], 90.00th=[ 1045], 95.00th=[ 1090], 00:46:26.770 | 99.00th=[ 1188], 99.50th=[ 1205], 99.90th=[ 1237], 99.95th=[ 1237], 00:46:26.770 | 99.99th=[ 1237] 00:46:26.770 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:46:26.770 slat (nsec): min=9417, max=81827, avg=29249.56, stdev=7833.08 00:46:26.770 clat (usec): min=119, max=829, avg=443.04, stdev=136.50 00:46:26.770 lat (usec): min=141, max=861, avg=472.29, stdev=137.60 00:46:26.770 clat percentiles (usec): 00:46:26.770 | 1.00th=[ 147], 5.00th=[ 245], 10.00th=[ 285], 20.00th=[ 322], 00:46:26.770 | 30.00th=[ 351], 40.00th=[ 392], 50.00th=[ 437], 60.00th=[ 474], 00:46:26.770 | 70.00th=[ 519], 80.00th=[ 570], 90.00th=[ 627], 95.00th=[ 685], 00:46:26.770 | 99.00th=[ 775], 
99.50th=[ 791], 99.90th=[ 824], 99.95th=[ 832], 00:46:26.770 | 99.99th=[ 832] 00:46:26.770 bw ( KiB/s): min= 4096, max= 4096, per=37.19%, avg=4096.00, stdev= 0.00, samples=1 00:46:26.770 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:46:26.770 lat (usec) : 250=3.58%, 500=39.81%, 750=24.92%, 1000=23.07% 00:46:26.771 lat (msec) : 2=8.63% 00:46:26.771 cpu : usr=2.60%, sys=4.20%, ctx=1566, majf=0, minf=1 00:46:26.771 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:26.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:26.771 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:26.771 issued rwts: total=541,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:26.771 latency : target=0, window=0, percentile=100.00%, depth=1 00:46:26.771 00:46:26.771 Run status group 0 (all jobs): 00:46:26.771 READ: bw=6259KiB/s (6409kB/s), 67.3KiB/s-2162KiB/s (68.9kB/s-2214kB/s), io=6328KiB (6480kB), run=1001-1011msec 00:46:26.771 WRITE: bw=10.8MiB/s (11.3MB/s), 2026KiB/s-4092KiB/s (2074kB/s-4190kB/s), io=10.9MiB (11.4MB), run=1001-1011msec 00:46:26.771 00:46:26.771 Disk stats (read/write): 00:46:26.771 nvme0n1: ios=62/512, merge=0/0, ticks=799/248, in_queue=1047, util=94.89% 00:46:26.771 nvme0n2: ios=518/512, merge=0/0, ticks=490/240, in_queue=730, util=86.53% 00:46:26.771 nvme0n3: ios=426/512, merge=0/0, ticks=1280/311, in_queue=1591, util=97.89% 00:46:26.771 nvme0n4: ios=512/729, merge=0/0, ticks=456/300, in_queue=756, util=89.52% 00:46:26.771 22:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:46:26.771 [global] 00:46:26.771 thread=1 00:46:26.771 invalidate=1 00:46:26.771 rw=write 00:46:26.771 time_based=1 00:46:26.771 runtime=1 00:46:26.771 ioengine=libaio 00:46:26.771 direct=1 00:46:26.771 bs=4096 00:46:26.771 iodepth=128 00:46:26.771 norandommap=0 00:46:26.771 numjobs=1 00:46:26.771 00:46:26.771 verify_dump=1 00:46:26.771 verify_backlog=512 00:46:26.771 verify_state_save=0 00:46:26.771 do_verify=1 00:46:26.771 verify=crc32c-intel 00:46:26.771 [job0] 00:46:26.771 filename=/dev/nvme0n1 00:46:26.771 [job1] 00:46:26.771 filename=/dev/nvme0n2 00:46:26.771 [job2] 00:46:26.771 filename=/dev/nvme0n3 00:46:26.771 [job3] 00:46:26.771 filename=/dev/nvme0n4 00:46:26.771 Could not set queue depth (nvme0n1) 00:46:26.771 Could not set queue depth (nvme0n2) 00:46:26.771 Could not set queue depth (nvme0n3) 00:46:26.771 Could not set queue depth (nvme0n4) 00:46:27.031 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:46:27.031 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:46:27.031 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:46:27.031 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:46:27.031 fio-3.35 00:46:27.031 Starting 4 threads 00:46:28.415 00:46:28.415 job0: (groupid=0, jobs=1): err= 0: pid=483691: Tue Oct 1 22:45:23 2024 00:46:28.415 read: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec) 00:46:28.415 slat (nsec): min=1012, max=10136k, avg=92188.90, stdev=632496.46 00:46:28.415 clat (usec): min=5267, max=38883, avg=11896.08, stdev=4054.25 00:46:28.415 lat (usec): min=5271, max=38890, avg=11988.27, stdev=4093.49 
00:46:28.415 clat percentiles (usec): 00:46:28.415 | 1.00th=[ 6063], 5.00th=[ 7308], 10.00th=[ 7832], 20.00th=[ 8848], 00:46:28.415 | 30.00th=[ 9634], 40.00th=[10421], 50.00th=[11338], 60.00th=[11994], 00:46:28.415 | 70.00th=[12649], 80.00th=[14353], 90.00th=[16319], 95.00th=[18744], 00:46:28.415 | 99.00th=[27657], 99.50th=[32113], 99.90th=[35390], 99.95th=[39060], 00:46:28.415 | 99.99th=[39060] 00:46:28.415 write: IOPS=5215, BW=20.4MiB/s (21.4MB/s)(20.5MiB/1007msec); 0 zone resets 00:46:28.415 slat (nsec): min=1654, max=7988.6k, avg=95472.16, stdev=598195.51 00:46:28.415 clat (usec): min=1187, max=38891, avg=12714.49, stdev=6129.34 00:46:28.415 lat (usec): min=1198, max=38915, avg=12809.96, stdev=6159.67 00:46:28.415 clat percentiles (usec): 00:46:28.415 | 1.00th=[ 5997], 5.00th=[ 6587], 10.00th=[ 7504], 20.00th=[ 8455], 00:46:28.415 | 30.00th=[ 9241], 40.00th=[ 9765], 50.00th=[10683], 60.00th=[12256], 00:46:28.415 | 70.00th=[14222], 80.00th=[15139], 90.00th=[20841], 95.00th=[29230], 00:46:28.415 | 99.00th=[33817], 99.50th=[34866], 99.90th=[35390], 99.95th=[35390], 00:46:28.415 | 99.99th=[39060] 00:46:28.415 bw ( KiB/s): min=16440, max=24576, per=22.16%, avg=20508.00, stdev=5753.02, samples=2 00:46:28.415 iops : min= 4110, max= 6144, avg=5127.00, stdev=1438.26, samples=2 00:46:28.415 lat (msec) : 2=0.11%, 4=0.09%, 10=37.67%, 20=55.46%, 50=6.68% 00:46:28.415 cpu : usr=4.08%, sys=5.77%, ctx=370, majf=0, minf=1 00:46:28.415 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:46:28.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:28.415 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:46:28.415 issued rwts: total=5120,5252,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:28.415 latency : target=0, window=0, percentile=100.00%, depth=128 00:46:28.415 job1: (groupid=0, jobs=1): err= 0: pid=483692: Tue Oct 1 22:45:23 2024 00:46:28.415 read: IOPS=2029, BW=8119KiB/s (8314kB/s)(8192KiB/1009msec) 00:46:28.415 slat (usec): min=2, max=17012, avg=246.31, stdev=1277.73 00:46:28.415 clat (usec): min=10297, max=69458, avg=31019.40, stdev=13198.65 00:46:28.415 lat (usec): min=10306, max=72138, avg=31265.71, stdev=13282.87 00:46:28.415 clat percentiles (usec): 00:46:28.415 | 1.00th=[10945], 5.00th=[13173], 10.00th=[14353], 20.00th=[16450], 00:46:28.415 | 30.00th=[18220], 40.00th=[25297], 50.00th=[34866], 60.00th=[37487], 00:46:28.415 | 70.00th=[39584], 80.00th=[42730], 90.00th=[47449], 95.00th=[49021], 00:46:28.415 | 99.00th=[62653], 99.50th=[67634], 99.90th=[69731], 99.95th=[69731], 00:46:28.415 | 99.99th=[69731] 00:46:28.415 write: IOPS=2293, BW=9173KiB/s (9394kB/s)(9256KiB/1009msec); 0 zone resets 00:46:28.415 slat (usec): min=2, max=17214, avg=209.74, stdev=1255.63 00:46:28.415 clat (usec): min=2524, max=97800, avg=27613.56, stdev=20504.83 00:46:28.415 lat (usec): min=6599, max=97810, avg=27823.30, stdev=20654.23 00:46:28.415 clat percentiles (usec): 00:46:28.415 | 1.00th=[10028], 5.00th=[11338], 10.00th=[11994], 20.00th=[12387], 00:46:28.415 | 30.00th=[13173], 40.00th=[13829], 50.00th=[16450], 60.00th=[26870], 00:46:28.415 | 70.00th=[34341], 80.00th=[42730], 90.00th=[49546], 95.00th=[83362], 00:46:28.415 | 99.00th=[91751], 99.50th=[91751], 99.90th=[98042], 99.95th=[98042], 00:46:28.415 | 99.99th=[98042] 00:46:28.415 bw ( KiB/s): min= 8192, max= 9296, per=9.45%, avg=8744.00, stdev=780.65, samples=2 00:46:28.415 iops : min= 2048, max= 2324, avg=2186.00, stdev=195.16, samples=2 00:46:28.415 lat (msec) : 4=0.02%, 10=0.48%, 
20=43.60%, 50=48.76%, 100=7.13% 00:46:28.415 cpu : usr=2.08%, sys=2.98%, ctx=189, majf=0, minf=1 00:46:28.415 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:46:28.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:28.415 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:46:28.415 issued rwts: total=2048,2314,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:28.415 latency : target=0, window=0, percentile=100.00%, depth=128 00:46:28.415 job2: (groupid=0, jobs=1): err= 0: pid=483694: Tue Oct 1 22:45:23 2024 00:46:28.415 read: IOPS=7626, BW=29.8MiB/s (31.2MB/s)(30.0MiB/1007msec) 00:46:28.415 slat (nsec): min=940, max=7589.3k, avg=62567.80, stdev=448301.58 00:46:28.415 clat (usec): min=1762, max=18466, avg=8548.68, stdev=2467.04 00:46:28.415 lat (usec): min=1764, max=18473, avg=8611.25, stdev=2480.90 00:46:28.415 clat percentiles (usec): 00:46:28.415 | 1.00th=[ 2573], 5.00th=[ 4817], 10.00th=[ 5800], 20.00th=[ 6783], 00:46:28.415 | 30.00th=[ 7111], 40.00th=[ 7832], 50.00th=[ 8225], 60.00th=[ 9110], 00:46:28.415 | 70.00th=[ 9765], 80.00th=[10290], 90.00th=[11600], 95.00th=[12780], 00:46:28.415 | 99.00th=[15926], 99.50th=[16712], 99.90th=[17957], 99.95th=[18482], 00:46:28.415 | 99.99th=[18482] 00:46:28.415 write: IOPS=7841, BW=30.6MiB/s (32.1MB/s)(30.8MiB/1007msec); 0 zone resets 00:46:28.415 slat (nsec): min=1637, max=7584.9k, avg=57233.86, stdev=356797.14 00:46:28.415 clat (usec): min=315, max=18609, avg=7786.80, stdev=2401.24 00:46:28.415 lat (usec): min=572, max=18618, avg=7844.03, stdev=2405.58 00:46:28.415 clat percentiles (usec): 00:46:28.415 | 1.00th=[ 1745], 5.00th=[ 3687], 10.00th=[ 4948], 20.00th=[ 5997], 00:46:28.415 | 30.00th=[ 7046], 40.00th=[ 7570], 50.00th=[ 7832], 60.00th=[ 8029], 00:46:28.415 | 70.00th=[ 8586], 80.00th=[ 9372], 90.00th=[10421], 95.00th=[11863], 00:46:28.415 | 99.00th=[15795], 99.50th=[16319], 99.90th=[17695], 99.95th=[18482], 00:46:28.415 | 99.99th=[18482] 00:46:28.415 bw ( KiB/s): min=29723, max=32488, per=33.61%, avg=31105.50, stdev=1955.15, samples=2 00:46:28.415 iops : min= 7430, max= 8122, avg=7776.00, stdev=489.32, samples=2 00:46:28.415 lat (usec) : 500=0.01% 00:46:28.415 lat (msec) : 2=0.78%, 4=3.94%, 10=77.04%, 20=18.23% 00:46:28.415 cpu : usr=4.47%, sys=7.75%, ctx=681, majf=0, minf=2 00:46:28.415 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:46:28.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:28.415 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:46:28.415 issued rwts: total=7680,7896,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:28.415 latency : target=0, window=0, percentile=100.00%, depth=128 00:46:28.415 job3: (groupid=0, jobs=1): err= 0: pid=483695: Tue Oct 1 22:45:23 2024 00:46:28.415 read: IOPS=7619, BW=29.8MiB/s (31.2MB/s)(30.0MiB/1008msec) 00:46:28.415 slat (nsec): min=909, max=9125.9k, avg=64657.32, stdev=414209.55 00:46:28.415 clat (usec): min=1838, max=17547, avg=8442.73, stdev=1668.24 00:46:28.415 lat (usec): min=1884, max=17573, avg=8507.39, stdev=1695.55 00:46:28.415 clat percentiles (usec): 00:46:28.415 | 1.00th=[ 3621], 5.00th=[ 6063], 10.00th=[ 6783], 20.00th=[ 7439], 00:46:28.415 | 30.00th=[ 7767], 40.00th=[ 7963], 50.00th=[ 8291], 60.00th=[ 8586], 00:46:28.415 | 70.00th=[ 8848], 80.00th=[ 9503], 90.00th=[10552], 95.00th=[11338], 00:46:28.415 | 99.00th=[13960], 99.50th=[14222], 99.90th=[15533], 99.95th=[17171], 00:46:28.415 | 99.99th=[17433] 00:46:28.415 write: 
IOPS=7823, BW=30.6MiB/s (32.0MB/s)(30.8MiB/1008msec); 0 zone resets 00:46:28.415 slat (nsec): min=1553, max=6747.5k, avg=58128.56, stdev=331195.88 00:46:28.415 clat (usec): min=543, max=17708, avg=7922.33, stdev=1983.80 00:46:28.415 lat (usec): min=638, max=17714, avg=7980.46, stdev=1999.52 00:46:28.415 clat percentiles (usec): 00:46:28.416 | 1.00th=[ 1860], 5.00th=[ 4621], 10.00th=[ 5407], 20.00th=[ 7046], 00:46:28.416 | 30.00th=[ 7570], 40.00th=[ 7832], 50.00th=[ 7963], 60.00th=[ 8160], 00:46:28.416 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[ 9765], 95.00th=[10945], 00:46:28.416 | 99.00th=[14746], 99.50th=[15533], 99.90th=[17433], 99.95th=[17433], 00:46:28.416 | 99.99th=[17695] 00:46:28.416 bw ( KiB/s): min=29888, max=32248, per=33.57%, avg=31068.00, stdev=1668.77, samples=2 00:46:28.416 iops : min= 7472, max= 8062, avg=7767.00, stdev=417.19, samples=2 00:46:28.416 lat (usec) : 750=0.01%, 1000=0.04% 00:46:28.416 lat (msec) : 2=0.54%, 4=2.06%, 10=86.05%, 20=11.29% 00:46:28.416 cpu : usr=4.17%, sys=6.16%, ctx=804, majf=0, minf=2 00:46:28.416 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:46:28.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:28.416 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:46:28.416 issued rwts: total=7680,7886,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:28.416 latency : target=0, window=0, percentile=100.00%, depth=128 00:46:28.416 00:46:28.416 Run status group 0 (all jobs): 00:46:28.416 READ: bw=87.2MiB/s (91.5MB/s), 8119KiB/s-29.8MiB/s (8314kB/s-31.2MB/s), io=88.0MiB (92.3MB), run=1007-1009msec 00:46:28.416 WRITE: bw=90.4MiB/s (94.8MB/s), 9173KiB/s-30.6MiB/s (9394kB/s-32.1MB/s), io=91.2MiB (95.6MB), run=1007-1009msec 00:46:28.416 00:46:28.416 Disk stats (read/write): 00:46:28.416 nvme0n1: ios=4116/4181, merge=0/0, ticks=48595/53836, in_queue=102431, util=98.80% 00:46:28.416 nvme0n2: ios=2034/2048, merge=0/0, ticks=19231/12700, in_queue=31931, util=100.00% 00:46:28.416 nvme0n3: ios=6144/6621, merge=0/0, ticks=45762/42773, in_queue=88535, util=88.28% 00:46:28.416 nvme0n4: ios=6144/6575, merge=0/0, ticks=29725/29329, in_queue=59054, util=89.20% 00:46:28.416 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:46:28.416 [global] 00:46:28.416 thread=1 00:46:28.416 invalidate=1 00:46:28.416 rw=randwrite 00:46:28.416 time_based=1 00:46:28.416 runtime=1 00:46:28.416 ioengine=libaio 00:46:28.416 direct=1 00:46:28.416 bs=4096 00:46:28.416 iodepth=128 00:46:28.416 norandommap=0 00:46:28.416 numjobs=1 00:46:28.416 00:46:28.416 verify_dump=1 00:46:28.416 verify_backlog=512 00:46:28.416 verify_state_save=0 00:46:28.416 do_verify=1 00:46:28.416 verify=crc32c-intel 00:46:28.416 [job0] 00:46:28.416 filename=/dev/nvme0n1 00:46:28.416 [job1] 00:46:28.416 filename=/dev/nvme0n2 00:46:28.416 [job2] 00:46:28.416 filename=/dev/nvme0n3 00:46:28.416 [job3] 00:46:28.416 filename=/dev/nvme0n4 00:46:28.416 Could not set queue depth (nvme0n1) 00:46:28.416 Could not set queue depth (nvme0n2) 00:46:28.416 Could not set queue depth (nvme0n3) 00:46:28.416 Could not set queue depth (nvme0n4) 00:46:28.676 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:46:28.676 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:46:28.676 job2: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:46:28.676 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:46:28.676 fio-3.35 00:46:28.676 Starting 4 threads 00:46:30.059 00:46:30.059 job0: (groupid=0, jobs=1): err= 0: pid=484212: Tue Oct 1 22:45:25 2024 00:46:30.059 read: IOPS=7130, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1004msec) 00:46:30.059 slat (nsec): min=966, max=10501k, avg=69336.87, stdev=519580.06 00:46:30.059 clat (usec): min=1698, max=40072, avg=9421.57, stdev=4373.81 00:46:30.059 lat (usec): min=1706, max=40081, avg=9490.90, stdev=4399.61 00:46:30.059 clat percentiles (usec): 00:46:30.059 | 1.00th=[ 3163], 5.00th=[ 5211], 10.00th=[ 5997], 20.00th=[ 7242], 00:46:30.059 | 30.00th=[ 7504], 40.00th=[ 7832], 50.00th=[ 8291], 60.00th=[ 8717], 00:46:30.059 | 70.00th=[ 9765], 80.00th=[11600], 90.00th=[13566], 95.00th=[15270], 00:46:30.059 | 99.00th=[33162], 99.50th=[35914], 99.90th=[39060], 99.95th=[40109], 00:46:30.059 | 99.99th=[40109] 00:46:30.059 write: IOPS=7139, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1004msec); 0 zone resets 00:46:30.059 slat (nsec): min=1575, max=12609k, avg=53650.20, stdev=419680.10 00:46:30.059 clat (usec): min=649, max=24748, avg=8352.18, stdev=3258.92 00:46:30.059 lat (usec): min=673, max=24752, avg=8405.83, stdev=3263.00 00:46:30.059 clat percentiles (usec): 00:46:30.059 | 1.00th=[ 2311], 5.00th=[ 4424], 10.00th=[ 5211], 20.00th=[ 6259], 00:46:30.059 | 30.00th=[ 6783], 40.00th=[ 7111], 50.00th=[ 7439], 60.00th=[ 7898], 00:46:30.059 | 70.00th=[ 8979], 80.00th=[10945], 90.00th=[12780], 95.00th=[14484], 00:46:30.059 | 99.00th=[19006], 99.50th=[19530], 99.90th=[24773], 99.95th=[24773], 00:46:30.059 | 99.99th=[24773] 00:46:30.059 bw ( KiB/s): min=26688, max=30656, per=31.04%, avg=28672.00, stdev=2805.80, samples=2 00:46:30.059 iops : min= 6672, max= 7664, avg=7168.00, stdev=701.45, samples=2 00:46:30.059 lat (usec) : 750=0.01%, 1000=0.07% 00:46:30.060 lat (msec) : 2=0.41%, 4=2.21%, 10=71.31%, 20=24.22%, 50=1.76% 00:46:30.060 cpu : usr=4.69%, sys=8.87%, ctx=456, majf=0, minf=1 00:46:30.060 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:46:30.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:30.060 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:46:30.060 issued rwts: total=7159,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:30.060 latency : target=0, window=0, percentile=100.00%, depth=128 00:46:30.060 job1: (groupid=0, jobs=1): err= 0: pid=484213: Tue Oct 1 22:45:25 2024 00:46:30.060 read: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec) 00:46:30.060 slat (nsec): min=878, max=14933k, avg=75190.17, stdev=506489.61 00:46:30.060 clat (usec): min=3685, max=42405, avg=9827.57, stdev=4357.20 00:46:30.060 lat (usec): min=3819, max=42411, avg=9902.76, stdev=4384.05 00:46:30.060 clat percentiles (usec): 00:46:30.060 | 1.00th=[ 4424], 5.00th=[ 6128], 10.00th=[ 6915], 20.00th=[ 7308], 00:46:30.060 | 30.00th=[ 8160], 40.00th=[ 8586], 50.00th=[ 8848], 60.00th=[ 9110], 00:46:30.060 | 70.00th=[10159], 80.00th=[10945], 90.00th=[13042], 95.00th=[15664], 00:46:30.060 | 99.00th=[31065], 99.50th=[39060], 99.90th=[42206], 99.95th=[42206], 00:46:30.060 | 99.99th=[42206] 00:46:30.060 write: IOPS=7056, BW=27.6MiB/s (28.9MB/s)(27.6MiB/1002msec); 0 zone resets 00:46:30.060 slat (nsec): min=1484, max=10198k, avg=66643.46, stdev=414204.10 00:46:30.060 clat (usec): min=864, max=33146, avg=8724.55, 
stdev=3291.80 00:46:30.060 lat (usec): min=1181, max=33153, avg=8791.20, stdev=3299.66 00:46:30.060 clat percentiles (usec): 00:46:30.060 | 1.00th=[ 4621], 5.00th=[ 5211], 10.00th=[ 5669], 20.00th=[ 6783], 00:46:30.060 | 30.00th=[ 7046], 40.00th=[ 7570], 50.00th=[ 8225], 60.00th=[ 8586], 00:46:30.060 | 70.00th=[ 8848], 80.00th=[ 9372], 90.00th=[13173], 95.00th=[15270], 00:46:30.060 | 99.00th=[19530], 99.50th=[24249], 99.90th=[32637], 99.95th=[33162], 00:46:30.060 | 99.99th=[33162] 00:46:30.060 bw ( KiB/s): min=26120, max=29424, per=30.06%, avg=27772.00, stdev=2336.28, samples=2 00:46:30.060 iops : min= 6530, max= 7356, avg=6943.00, stdev=584.07, samples=2 00:46:30.060 lat (usec) : 1000=0.01% 00:46:30.060 lat (msec) : 2=0.07%, 4=0.07%, 10=75.33%, 20=22.57%, 50=1.96% 00:46:30.060 cpu : usr=3.00%, sys=6.09%, ctx=598, majf=0, minf=2 00:46:30.060 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:46:30.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:30.060 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:46:30.060 issued rwts: total=6656,7071,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:30.060 latency : target=0, window=0, percentile=100.00%, depth=128 00:46:30.060 job2: (groupid=0, jobs=1): err= 0: pid=484214: Tue Oct 1 22:45:25 2024 00:46:30.060 read: IOPS=6057, BW=23.7MiB/s (24.8MB/s)(23.8MiB/1004msec) 00:46:30.060 slat (nsec): min=968, max=9667.5k, avg=84140.97, stdev=498358.06 00:46:30.060 clat (usec): min=984, max=42074, avg=10519.05, stdev=5734.03 00:46:30.060 lat (usec): min=3116, max=42082, avg=10603.19, stdev=5780.70 00:46:30.060 clat percentiles (usec): 00:46:30.060 | 1.00th=[ 5669], 5.00th=[ 6718], 10.00th=[ 7308], 20.00th=[ 7963], 00:46:30.060 | 30.00th=[ 8225], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 8979], 00:46:30.060 | 70.00th=[ 9372], 80.00th=[10159], 90.00th=[16909], 95.00th=[24249], 00:46:30.060 | 99.00th=[36439], 99.50th=[36439], 99.90th=[38536], 99.95th=[39060], 00:46:30.060 | 99.99th=[42206] 00:46:30.060 write: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec); 0 zone resets 00:46:30.060 slat (nsec): min=1577, max=10689k, avg=76644.68, stdev=518566.69 00:46:30.060 clat (usec): min=1262, max=41655, avg=10254.70, stdev=5411.38 00:46:30.060 lat (usec): min=1272, max=41664, avg=10331.34, stdev=5455.54 00:46:30.060 clat percentiles (usec): 00:46:30.060 | 1.00th=[ 4621], 5.00th=[ 6587], 10.00th=[ 7242], 20.00th=[ 7767], 00:46:30.060 | 30.00th=[ 8029], 40.00th=[ 8160], 50.00th=[ 8291], 60.00th=[ 8455], 00:46:30.060 | 70.00th=[ 8979], 80.00th=[ 9765], 90.00th=[19530], 95.00th=[23725], 00:46:30.060 | 99.00th=[31327], 99.50th=[34866], 99.90th=[34866], 99.95th=[34866], 00:46:30.060 | 99.99th=[41681] 00:46:30.060 bw ( KiB/s): min=20480, max=28672, per=26.60%, avg=24576.00, stdev=5792.62, samples=2 00:46:30.060 iops : min= 5120, max= 7168, avg=6144.00, stdev=1448.15, samples=2 00:46:30.060 lat (usec) : 1000=0.01% 00:46:30.060 lat (msec) : 2=0.13%, 4=0.61%, 10=79.82%, 20=11.95%, 50=7.48% 00:46:30.060 cpu : usr=2.79%, sys=3.79%, ctx=592, majf=0, minf=1 00:46:30.060 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:46:30.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:30.060 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:46:30.060 issued rwts: total=6082,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:30.060 latency : target=0, window=0, percentile=100.00%, depth=128 00:46:30.060 job3: (groupid=0, jobs=1): err= 0: 
pid=484215: Tue Oct 1 22:45:25 2024 00:46:30.060 read: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec) 00:46:30.060 slat (nsec): min=952, max=11771k, avg=136090.76, stdev=732556.19 00:46:30.060 clat (usec): min=7738, max=73311, avg=16653.52, stdev=9398.93 00:46:30.060 lat (usec): min=8071, max=73317, avg=16789.61, stdev=9459.90 00:46:30.060 clat percentiles (usec): 00:46:30.060 | 1.00th=[ 8160], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 9503], 00:46:30.060 | 30.00th=[10945], 40.00th=[13173], 50.00th=[14091], 60.00th=[16909], 00:46:30.060 | 70.00th=[18482], 80.00th=[19268], 90.00th=[26870], 95.00th=[31065], 00:46:30.060 | 99.00th=[63177], 99.50th=[70779], 99.90th=[72877], 99.95th=[72877], 00:46:30.060 | 99.99th=[72877] 00:46:30.060 write: IOPS=2799, BW=10.9MiB/s (11.5MB/s)(11.0MiB/1002msec); 0 zone resets 00:46:30.060 slat (nsec): min=1615, max=18872k, avg=225385.94, stdev=1041403.39 00:46:30.060 clat (usec): min=903, max=81806, avg=29575.82, stdev=21819.61 00:46:30.060 lat (usec): min=912, max=81811, avg=29801.21, stdev=21943.48 00:46:30.060 clat percentiles (usec): 00:46:30.060 | 1.00th=[ 2540], 5.00th=[ 7504], 10.00th=[ 9503], 20.00th=[10421], 00:46:30.060 | 30.00th=[14484], 40.00th=[16909], 50.00th=[21627], 60.00th=[27132], 00:46:30.060 | 70.00th=[33424], 80.00th=[52691], 90.00th=[65799], 95.00th=[78119], 00:46:30.060 | 99.00th=[80217], 99.50th=[81265], 99.90th=[82314], 99.95th=[82314], 00:46:30.060 | 99.99th=[82314] 00:46:30.060 bw ( KiB/s): min= 9776, max=11648, per=11.60%, avg=10712.00, stdev=1323.70, samples=2 00:46:30.060 iops : min= 2444, max= 2912, avg=2678.00, stdev=330.93, samples=2 00:46:30.060 lat (usec) : 1000=0.07% 00:46:30.060 lat (msec) : 2=0.13%, 4=0.62%, 10=19.14%, 20=42.95%, 50=25.20% 00:46:30.060 lat (msec) : 100=11.89% 00:46:30.060 cpu : usr=1.30%, sys=3.20%, ctx=359, majf=0, minf=1 00:46:30.060 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:46:30.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:30.060 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:46:30.060 issued rwts: total=2560,2805,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:30.060 latency : target=0, window=0, percentile=100.00%, depth=128 00:46:30.060 00:46:30.060 Run status group 0 (all jobs): 00:46:30.060 READ: bw=87.4MiB/s (91.6MB/s), 9.98MiB/s-27.9MiB/s (10.5MB/s-29.2MB/s), io=87.7MiB (92.0MB), run=1002-1004msec 00:46:30.060 WRITE: bw=90.2MiB/s (94.6MB/s), 10.9MiB/s-27.9MiB/s (11.5MB/s-29.2MB/s), io=90.6MiB (95.0MB), run=1002-1004msec 00:46:30.060 00:46:30.060 Disk stats (read/write): 00:46:30.060 nvme0n1: ios=5169/5631, merge=0/0, ticks=45806/45630, in_queue=91436, util=83.77% 00:46:30.060 nvme0n2: ios=5682/5855, merge=0/0, ticks=29960/29527, in_queue=59487, util=86.47% 00:46:30.060 nvme0n3: ios=4584/4608, merge=0/0, ticks=17180/16063, in_queue=33243, util=91.33% 00:46:30.060 nvme0n4: ios=1692/2048, merge=0/0, ticks=18975/39422, in_queue=58397, util=96.29% 00:46:30.060 22:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:46:30.060 22:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=484549 00:46:30.060 22:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:46:30.060 22:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:46:30.060 
[global] 00:46:30.060 thread=1 00:46:30.060 invalidate=1 00:46:30.060 rw=read 00:46:30.060 time_based=1 00:46:30.060 runtime=10 00:46:30.060 ioengine=libaio 00:46:30.060 direct=1 00:46:30.060 bs=4096 00:46:30.060 iodepth=1 00:46:30.060 norandommap=1 00:46:30.060 numjobs=1 00:46:30.060 00:46:30.060 [job0] 00:46:30.060 filename=/dev/nvme0n1 00:46:30.060 [job1] 00:46:30.060 filename=/dev/nvme0n2 00:46:30.060 [job2] 00:46:30.060 filename=/dev/nvme0n3 00:46:30.060 [job3] 00:46:30.060 filename=/dev/nvme0n4 00:46:30.060 Could not set queue depth (nvme0n1) 00:46:30.060 Could not set queue depth (nvme0n2) 00:46:30.060 Could not set queue depth (nvme0n3) 00:46:30.060 Could not set queue depth (nvme0n4) 00:46:30.321 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:46:30.321 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:46:30.321 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:46:30.321 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:46:30.321 fio-3.35 00:46:30.321 Starting 4 threads 00:46:33.617 22:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:46:33.617 22:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:46:33.617 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=700416, buflen=4096 00:46:33.617 fio: pid=484743, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:46:33.617 22:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:46:33.617 22:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:46:33.617 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=1081344, buflen=4096 00:46:33.617 fio: pid=484742, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:46:33.617 22:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:46:33.617 22:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:46:33.617 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=14254080, buflen=4096 00:46:33.617 fio: pid=484740, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:46:33.617 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=7786496, buflen=4096 00:46:33.617 fio: pid=484741, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:46:33.879 22:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:46:33.879 22:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc2 00:46:33.879 00:46:33.879 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=484740: Tue Oct 1 22:45:28 2024 00:46:33.879 read: IOPS=1194, BW=4775KiB/s (4890kB/s)(13.6MiB/2915msec) 00:46:33.879 slat (usec): min=4, max=26175, avg=34.02, stdev=520.84 00:46:33.879 clat (usec): min=398, max=41400, avg=792.02, stdev=974.93 00:46:33.879 lat (usec): min=419, max=41408, avg=826.05, stdev=1104.89 00:46:33.879 clat percentiles (usec): 00:46:33.879 | 1.00th=[ 578], 5.00th=[ 652], 10.00th=[ 676], 20.00th=[ 717], 00:46:33.879 | 30.00th=[ 758], 40.00th=[ 766], 50.00th=[ 775], 60.00th=[ 791], 00:46:33.879 | 70.00th=[ 799], 80.00th=[ 816], 90.00th=[ 840], 95.00th=[ 857], 00:46:33.879 | 99.00th=[ 930], 99.50th=[ 963], 99.90th=[ 1074], 99.95th=[41157], 00:46:33.879 | 99.99th=[41157] 00:46:33.879 bw ( KiB/s): min= 4144, max= 5096, per=64.65%, avg=4864.00, stdev=404.42, samples=5 00:46:33.879 iops : min= 1036, max= 1274, avg=1216.00, stdev=101.10, samples=5 00:46:33.879 lat (usec) : 500=0.17%, 750=27.98%, 1000=71.56% 00:46:33.879 lat (msec) : 2=0.17%, 4=0.03%, 50=0.06% 00:46:33.879 cpu : usr=1.17%, sys=3.02%, ctx=3484, majf=0, minf=1 00:46:33.879 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:33.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:33.879 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:33.879 issued rwts: total=3481,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:33.879 latency : target=0, window=0, percentile=100.00%, depth=1 00:46:33.879 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=484741: Tue Oct 1 22:45:28 2024 00:46:33.879 read: IOPS=615, BW=2459KiB/s (2518kB/s)(7604KiB/3092msec) 00:46:33.879 slat (usec): min=3, max=22064, avg=49.57, stdev=615.92 00:46:33.879 clat (usec): min=464, max=41846, avg=1556.90, stdev=5111.94 00:46:33.879 lat (usec): min=469, max=41873, avg=1606.48, stdev=5145.92 00:46:33.879 clat percentiles (usec): 00:46:33.879 | 1.00th=[ 562], 5.00th=[ 660], 10.00th=[ 717], 20.00th=[ 775], 00:46:33.879 | 30.00th=[ 824], 40.00th=[ 857], 50.00th=[ 881], 60.00th=[ 906], 00:46:33.879 | 70.00th=[ 955], 80.00th=[ 1029], 90.00th=[ 1106], 95.00th=[ 1172], 00:46:33.879 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:46:33.879 | 99.99th=[41681] 00:46:33.879 bw ( KiB/s): min= 664, max= 4736, per=32.79%, avg=2467.50, stdev=1813.81, samples=6 00:46:33.879 iops : min= 166, max= 1184, avg=616.83, stdev=453.44, samples=6 00:46:33.879 lat (usec) : 500=0.26%, 750=14.09%, 1000=61.67% 00:46:33.879 lat (msec) : 2=22.13%, 10=0.16%, 50=1.63% 00:46:33.879 cpu : usr=0.81%, sys=2.30%, ctx=1908, majf=0, minf=2 00:46:33.879 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:33.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:33.879 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:33.879 issued rwts: total=1902,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:33.879 latency : target=0, window=0, percentile=100.00%, depth=1 00:46:33.879 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=484742: Tue Oct 1 22:45:28 2024 00:46:33.879 read: IOPS=95, BW=380KiB/s (390kB/s)(1056KiB/2776msec) 00:46:33.879 slat (usec): min=3, max=13637, avg=74.51, stdev=836.36 00:46:33.879 clat (usec): min=357, max=41500, avg=10335.55, 
stdev=17081.85 00:46:33.879 lat (usec): min=362, max=41526, avg=10410.24, stdev=17075.28 00:46:33.879 clat percentiles (usec): 00:46:33.879 | 1.00th=[ 619], 5.00th=[ 685], 10.00th=[ 750], 20.00th=[ 824], 00:46:33.879 | 30.00th=[ 848], 40.00th=[ 865], 50.00th=[ 898], 60.00th=[ 955], 00:46:33.879 | 70.00th=[ 1020], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:46:33.879 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:46:33.879 | 99.99th=[41681] 00:46:33.879 bw ( KiB/s): min= 232, max= 584, per=5.04%, avg=379.20, stdev=132.01, samples=5 00:46:33.879 iops : min= 58, max= 146, avg=94.80, stdev=33.00, samples=5 00:46:33.879 lat (usec) : 500=0.75%, 750=9.43%, 1000=57.36% 00:46:33.879 lat (msec) : 2=7.55%, 4=1.13%, 50=23.40% 00:46:33.879 cpu : usr=0.07%, sys=0.40%, ctx=266, majf=0, minf=2 00:46:33.879 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:33.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:33.879 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:33.879 issued rwts: total=265,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:33.879 latency : target=0, window=0, percentile=100.00%, depth=1 00:46:33.879 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=484743: Tue Oct 1 22:45:28 2024 00:46:33.879 read: IOPS=66, BW=265KiB/s (271kB/s)(684KiB/2581msec) 00:46:33.879 slat (nsec): min=11231, max=55308, avg=26234.83, stdev=3295.61 00:46:33.879 clat (usec): min=822, max=42087, avg=14930.41, stdev=19408.79 00:46:33.879 lat (usec): min=848, max=42113, avg=14956.64, stdev=19408.56 00:46:33.879 clat percentiles (usec): 00:46:33.879 | 1.00th=[ 881], 5.00th=[ 930], 10.00th=[ 971], 20.00th=[ 1037], 00:46:33.879 | 30.00th=[ 1074], 40.00th=[ 1090], 50.00th=[ 1123], 60.00th=[ 1172], 00:46:33.879 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:46:33.879 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:46:33.879 | 99.99th=[42206] 00:46:33.879 bw ( KiB/s): min= 96, max= 968, per=3.59%, avg=270.40, stdev=389.97, samples=5 00:46:33.879 iops : min= 24, max= 242, avg=67.60, stdev=97.49, samples=5 00:46:33.879 lat (usec) : 1000=15.12% 00:46:33.879 lat (msec) : 2=50.58%, 50=33.72% 00:46:33.879 cpu : usr=0.00%, sys=0.31%, ctx=173, majf=0, minf=2 00:46:33.879 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:33.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:33.879 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:33.879 issued rwts: total=172,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:33.879 latency : target=0, window=0, percentile=100.00%, depth=1 00:46:33.879 00:46:33.879 Run status group 0 (all jobs): 00:46:33.879 READ: bw=7524KiB/s (7705kB/s), 265KiB/s-4775KiB/s (271kB/s-4890kB/s), io=22.7MiB (23.8MB), run=2581-3092msec 00:46:33.879 00:46:33.879 Disk stats (read/write): 00:46:33.879 nvme0n1: ios=3345/0, merge=0/0, ticks=2590/0, in_queue=2590, util=91.69% 00:46:33.879 nvme0n2: ios=1874/0, merge=0/0, ticks=2771/0, in_queue=2771, util=93.03% 00:46:33.879 nvme0n3: ios=239/0, merge=0/0, ticks=2492/0, in_queue=2492, util=95.59% 00:46:33.879 nvme0n4: ios=211/0, merge=0/0, ticks=3372/0, in_queue=3372, util=99.24% 00:46:33.879 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:46:33.879 22:45:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:46:34.141 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:46:34.141 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:46:34.402 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:46:34.402 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:46:34.402 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:46:34.402 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:46:34.663 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:46:34.663 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 484549 00:46:34.663 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:46:34.663 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:46:34.663 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:46:34.663 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:46:34.663 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:46:34.663 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:46:34.663 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:46:34.663 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:46:34.663 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:46:34.663 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:46:34.663 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:46:34.663 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:46:34.663 nvmf hotplug test: fio failed as expected 00:46:34.663 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:46:34.924 22:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:46:34.924 
22:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:46:34.924 22:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:46:34.924 22:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:46:34.924 22:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:46:34.924 22:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:46:34.924 22:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:46:34.925 22:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:46:34.925 22:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:46:34.925 22:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:46:34.925 22:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:46:34.925 rmmod nvme_tcp 00:46:34.925 rmmod nvme_fabrics 00:46:34.925 rmmod nvme_keyring 00:46:34.925 22:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:46:35.186 22:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:46:35.186 22:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:46:35.186 22:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 481381 ']' 00:46:35.186 22:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 481381 00:46:35.186 22:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 481381 ']' 00:46:35.186 22:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 481381 00:46:35.186 22:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:46:35.186 22:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:46:35.186 22:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 481381 00:46:35.186 22:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:46:35.186 22:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:46:35.186 22:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 481381' 00:46:35.186 killing process with pid 481381 00:46:35.186 22:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 481381 00:46:35.186 22:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 481381 00:46:35.446 22:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:46:35.447 22:45:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:46:35.447 22:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:46:35.447 22:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:46:35.447 22:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:46:35.447 22:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:46:35.447 22:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:46:35.447 22:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:46:35.447 22:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:46:35.447 22:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:35.447 22:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:46:35.447 22:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:37.360 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:46:37.360 00:46:37.360 real 0m28.052s 00:46:37.360 user 2m19.796s 00:46:37.360 sys 0m12.651s 00:46:37.360 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:46:37.360 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:46:37.360 ************************************ 00:46:37.360 END TEST nvmf_fio_target 00:46:37.360 ************************************ 00:46:37.360 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:46:37.360 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:46:37.360 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:46:37.360 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:46:37.360 ************************************ 00:46:37.361 START TEST nvmf_bdevio 00:46:37.361 ************************************ 00:46:37.361 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:46:37.622 * Looking for test storage... 
00:46:37.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:46:37.622 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:46:37.622 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:46:37.622 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:46:37.622 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:46:37.622 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:37.622 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:37.622 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:37.622 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:46:37.622 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:46:37.622 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:46:37.622 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:46:37.622 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:46:37.622 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:46:37.622 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:46:37.622 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:37.622 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:46:37.622 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:46:37.622 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:37.622 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:46:37.622 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:46:37.622 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:46:37.622 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:37.622 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:46:37.622 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:46:37.622 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:46:37.622 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:46:37.622 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:37.622 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:46:37.622 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:46:37.622 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:37.622 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:37.622 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:46:37.622 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:37.622 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:46:37.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:37.622 --rc genhtml_branch_coverage=1 00:46:37.622 --rc genhtml_function_coverage=1 00:46:37.622 --rc genhtml_legend=1 00:46:37.622 --rc geninfo_all_blocks=1 00:46:37.622 --rc geninfo_unexecuted_blocks=1 00:46:37.622 00:46:37.622 ' 00:46:37.622 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:46:37.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:37.622 --rc genhtml_branch_coverage=1 00:46:37.622 --rc genhtml_function_coverage=1 00:46:37.622 --rc genhtml_legend=1 00:46:37.622 --rc geninfo_all_blocks=1 00:46:37.622 --rc geninfo_unexecuted_blocks=1 00:46:37.622 00:46:37.622 ' 00:46:37.622 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:46:37.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:37.622 --rc genhtml_branch_coverage=1 00:46:37.622 --rc genhtml_function_coverage=1 00:46:37.622 --rc genhtml_legend=1 00:46:37.622 --rc geninfo_all_blocks=1 00:46:37.622 --rc geninfo_unexecuted_blocks=1 00:46:37.622 00:46:37.622 ' 00:46:37.622 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:46:37.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:37.622 --rc genhtml_branch_coverage=1 00:46:37.622 --rc genhtml_function_coverage=1 00:46:37.622 --rc genhtml_legend=1 00:46:37.622 --rc geninfo_all_blocks=1 00:46:37.622 --rc geninfo_unexecuted_blocks=1 00:46:37.622 00:46:37.622 ' 00:46:37.622 22:45:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:46:37.622 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:46:37.622 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:37.622 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:37.622 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:37.622 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:37.623 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:37.623 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:37.623 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:37.623 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:37.623 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:37.623 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:37.623 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:46:37.623 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:46:37.623 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:37.623 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:37.623 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:46:37.623 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:37.623 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:46:37.623 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:46:37.623 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:37.623 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:37.623 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:37.623 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:37.623 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:37.623 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:37.623 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:46:37.623 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:37.623 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:46:37.623 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:46:37.623 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:46:37.623 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:37.623 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:37.623 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:37.623 22:45:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:46:37.623 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:46:37.623 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:46:37.623 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:46:37.623 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:46:37.623 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:46:37.623 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:46:37.623 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:46:37.623 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:46:37.623 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:46:37.623 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:46:37.623 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:46:37.623 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:46:37.623 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:37.623 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:46:37.623 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:37.623 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:46:37.623 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:46:37.623 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:46:37.623 22:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:46:45.769 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:46:45.769 22:45:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:46:45.769 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:46:45.769 Found net devices under 0000:4b:00.0: cvl_0_0 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 
-- # [[ tcp == tcp ]] 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:46:45.769 Found net devices under 0000:4b:00.1: cvl_0_1 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:46:45.769 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:46:45.769 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:46:45.769 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:46:45.769 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:46:45.769 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:46:45.769 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:46:45.769 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:46:45.769 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:46:45.769 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:46:45.769 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:46:45.769 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms 00:46:45.769 00:46:45.769 --- 10.0.0.2 ping statistics --- 00:46:45.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:45.770 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:46:45.770 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:46:45.770 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:46:45.770 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:46:45.770 00:46:45.770 --- 10.0.0.1 ping statistics --- 00:46:45.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:45.770 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:46:45.770 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:46:45.770 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:46:45.770 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:46:45.770 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:46:45.770 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:46:45.770 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:46:45.770 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:46:45.770 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:46:45.770 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:46:45.770 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:46:45.770 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:46:45.770 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:46:45.770 22:45:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:46:45.770 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=489755 00:46:45.770 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 489755 00:46:45.770 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:46:45.770 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 489755 ']' 00:46:45.770 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:45.770 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:46:45.770 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:45.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:45.770 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:46:45.770 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:46:45.770 [2024-10-01 22:45:40.257819] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:46:45.770 [2024-10-01 22:45:40.258813] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:46:45.770 [2024-10-01 22:45:40.258851] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:45.770 [2024-10-01 22:45:40.345985] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:46:45.770 [2024-10-01 22:45:40.434707] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:45.770 [2024-10-01 22:45:40.434769] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:45.770 [2024-10-01 22:45:40.434778] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:45.770 [2024-10-01 22:45:40.434786] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:45.770 [2024-10-01 22:45:40.434792] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:46:45.770 [2024-10-01 22:45:40.434968] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:46:45.770 [2024-10-01 22:45:40.435131] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:46:45.770 [2024-10-01 22:45:40.435291] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:46:45.770 [2024-10-01 22:45:40.435292] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:46:45.770 [2024-10-01 22:45:40.595710] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
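Annotation: the trace above shows nvmf/common.sh carving the two-port E810 card into a split test topology: the first port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, while the second port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, with an iptables rule opening TCP/4420 and a ping in each direction as a sanity check. A minimal standalone sketch of the same setup, assuming the cvl_0_0/cvl_0_1 device names from this run (they will differ on other hosts):

    # sketch of the nvmf_tcp_init steps traced above; device names are from this run
    ip netns add cvl_0_0_ns_spdk                      # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator

Placing the target port in its own namespace forces the NVMe/TCP traffic across the physical link between the two ports instead of letting the kernel short-circuit it through loopback.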
00:46:45.770 [2024-10-01 22:45:40.596545] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:46:45.770 [2024-10-01 22:45:40.596956] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:46:45.770 [2024-10-01 22:45:40.597477] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:46:45.770 [2024-10-01 22:45:40.597521] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:46:46.030 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:46:46.030 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:46:46.030 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:46:46.030 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:46:46.030 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:46:46.030 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:46.030 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:46:46.030 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:46.030 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:46:46.030 [2024-10-01 22:45:41.132168] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:46.030 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:46.030 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:46:46.030 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:46.030 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:46:46.030 Malloc0 00:46:46.030 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:46.030 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:46:46.030 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:46.030 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:46:46.030 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:46.030 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:46:46.030 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:46.030 22:45:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:46:46.030 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:46.030 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:46:46.030 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:46.030 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:46:46.030 [2024-10-01 22:45:41.212256] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:46.030 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:46.030 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:46:46.030 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:46:46.030 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:46:46.030 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:46:46.030 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:46:46.030 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:46:46.030 { 00:46:46.030 "params": { 00:46:46.030 "name": "Nvme$subsystem", 00:46:46.030 "trtype": "$TEST_TRANSPORT", 00:46:46.030 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:46.030 "adrfam": "ipv4", 00:46:46.030 "trsvcid": "$NVMF_PORT", 00:46:46.030 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:46.030 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:46.030 "hdgst": ${hdgst:-false}, 00:46:46.030 "ddgst": ${ddgst:-false} 00:46:46.030 }, 00:46:46.030 "method": "bdev_nvme_attach_controller" 00:46:46.030 } 00:46:46.030 EOF 00:46:46.030 )") 00:46:46.030 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:46:46.030 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:46:46.030 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:46:46.030 22:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:46:46.030 "params": { 00:46:46.030 "name": "Nvme1", 00:46:46.030 "trtype": "tcp", 00:46:46.030 "traddr": "10.0.0.2", 00:46:46.030 "adrfam": "ipv4", 00:46:46.030 "trsvcid": "4420", 00:46:46.031 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:46:46.031 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:46:46.031 "hdgst": false, 00:46:46.031 "ddgst": false 00:46:46.031 }, 00:46:46.031 "method": "bdev_nvme_attach_controller" 00:46:46.031 }' 00:46:46.031 [2024-10-01 22:45:41.268245] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
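Annotation: above, the target is configured over the RPC socket in five steps (transport, malloc bdev, subsystem, namespace, listener), and gen_nvmf_target_json then emits the initiator-side JSON that bdevio reads from /dev/fd/62. Outside the harness the same target setup could plausibly be reproduced with SPDK's scripts/rpc.py, roughly as in this sketch (rpc_cmd in the trace is the harness wrapper; the workspace path is from this run, and the flags are copied verbatim from the trace rather than re-derived):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192       # flags as traced above
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MiB bdev, 512 B blocks
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The RPC socket (/var/tmp/spdk.sock) lives on the shared filesystem, so these commands can be issued from the default namespace even though nvmf_tgt itself runs inside cvl_0_0_ns_spdk via ip netns exec.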
00:46:46.031 [2024-10-01 22:45:41.268302] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid489995 ] 00:46:46.291 [2024-10-01 22:45:41.334299] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:46:46.291 [2024-10-01 22:45:41.410855] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:46:46.291 [2024-10-01 22:45:41.411062] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:46:46.291 [2024-10-01 22:45:41.411065] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:46:46.550 I/O targets: 00:46:46.550 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:46:46.550 00:46:46.550 00:46:46.550 CUnit - A unit testing framework for C - Version 2.1-3 00:46:46.550 http://cunit.sourceforge.net/ 00:46:46.550 00:46:46.550 00:46:46.550 Suite: bdevio tests on: Nvme1n1 00:46:46.810 Test: blockdev write read block ...passed 00:46:46.810 Test: blockdev write zeroes read block ...passed 00:46:46.810 Test: blockdev write zeroes read no split ...passed 00:46:46.810 Test: blockdev write zeroes read split ...passed 00:46:46.810 Test: blockdev write zeroes read split partial ...passed 00:46:46.810 Test: blockdev reset ...[2024-10-01 22:45:41.876590] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:46.810 [2024-10-01 22:45:41.876671] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf21270 (9): Bad file descriptor 00:46:46.810 [2024-10-01 22:45:41.883586] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:46:46.810 passed 00:46:46.810 Test: blockdev write read 8 blocks ...passed 00:46:46.810 Test: blockdev write read size > 128k ...passed 00:46:46.810 Test: blockdev write read invalid size ...passed 00:46:46.810 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:46:46.810 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:46:46.810 Test: blockdev write read max offset ...passed 00:46:47.070 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:46:47.070 Test: blockdev writev readv 8 blocks ...passed 00:46:47.070 Test: blockdev writev readv 30 x 1block ...passed 00:46:47.070 Test: blockdev writev readv block ...passed 00:46:47.070 Test: blockdev writev readv size > 128k ...passed 00:46:47.070 Test: blockdev writev readv size > 128k in two iovs ...passed 00:46:47.070 Test: blockdev comparev and writev ...[2024-10-01 22:45:42.149225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:46:47.070 [2024-10-01 22:45:42.149251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:47.070 [2024-10-01 22:45:42.149263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:46:47.070 [2024-10-01 22:45:42.149269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:46:47.070 [2024-10-01 22:45:42.149789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:46:47.070 [2024-10-01 22:45:42.149799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:46:47.070 [2024-10-01 22:45:42.149808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:46:47.070 [2024-10-01 22:45:42.149815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:46:47.070 [2024-10-01 22:45:42.150379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:46:47.070 [2024-10-01 22:45:42.150387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:46:47.070 [2024-10-01 22:45:42.150397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:46:47.070 [2024-10-01 22:45:42.150402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:46:47.070 [2024-10-01 22:45:42.150958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:46:47.070 [2024-10-01 22:45:42.150967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:46:47.070 [2024-10-01 22:45:42.150976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:46:47.070 [2024-10-01 22:45:42.150982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:46:47.070 passed 00:46:47.070 Test: blockdev nvme passthru rw ...passed 00:46:47.070 Test: blockdev nvme passthru vendor specific ...[2024-10-01 22:45:42.235435] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:46:47.070 [2024-10-01 22:45:42.235451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:46:47.070 [2024-10-01 22:45:42.235697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:46:47.070 [2024-10-01 22:45:42.235705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:46:47.070 [2024-10-01 22:45:42.235967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:46:47.070 [2024-10-01 22:45:42.235975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:46:47.070 [2024-10-01 22:45:42.236200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:46:47.070 [2024-10-01 22:45:42.236208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:46:47.070 passed 00:46:47.070 Test: blockdev nvme admin passthru ...passed 00:46:47.070 Test: blockdev copy ...passed 00:46:47.070 00:46:47.070 Run Summary: Type Total Ran Passed Failed Inactive 00:46:47.070 suites 1 1 n/a 0 0 00:46:47.070 tests 23 23 23 0 0 00:46:47.070 asserts 152 152 152 0 n/a 00:46:47.070 00:46:47.070 Elapsed time = 1.069 seconds 00:46:47.331 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:46:47.331 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:47.331 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:46:47.331 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:47.331 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:46:47.331 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:46:47.331 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:46:47.331 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:46:47.331 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:46:47.331 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:46:47.331 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:46:47.331 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:46:47.331 rmmod nvme_tcp 00:46:47.331 rmmod nvme_fabrics 00:46:47.331 rmmod nvme_keyring 00:46:47.331 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
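Annotation: all 23 bdevio tests passed; the COMPARE FAILURE / ABORTED - FAILED FUSED notices above appear to be the expected negative-path outcome of the fused compare-and-write cases, since the suite reports those tests as passed. The trap installed at start then unwinds the target. Condensed from the trace (killprocess internals are not expanded in this log, so the last step is a sketch):

    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # tear down bdevio's target subsystem
    sync
    modprobe -v -r nvme-tcp        # drops nvme_tcp / nvme_fabrics / nvme_keyring, per the rmmod lines above
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                # killprocess 489755 in this run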
00:46:47.331 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:46:47.331 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:46:47.331 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 489755 ']' 00:46:47.331 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 489755 00:46:47.331 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 489755 ']' 00:46:47.331 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 489755 00:46:47.331 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:46:47.331 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:46:47.331 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 489755 00:46:47.592 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:46:47.592 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:46:47.592 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 489755' 00:46:47.592 killing process with pid 489755 00:46:47.592 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 489755 00:46:47.592 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 489755 00:46:47.853 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:46:47.853 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:46:47.853 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:46:47.853 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:46:47.854 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:46:47.854 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:46:47.854 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:46:47.854 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:46:47.854 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:46:47.854 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:47.854 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:46:47.854 22:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:49.762 22:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:46:49.762 00:46:49.762 real 0m12.379s 00:46:49.762 user 0m10.469s 
00:46:49.762 sys 0m6.645s 00:46:49.762 22:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:46:49.762 22:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:46:49.762 ************************************ 00:46:49.762 END TEST nvmf_bdevio 00:46:49.762 ************************************ 00:46:50.021 22:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:46:50.021 00:46:50.021 real 4m59.080s 00:46:50.021 user 10m16.647s 00:46:50.021 sys 2m4.020s 00:46:50.021 22:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # xtrace_disable 00:46:50.021 22:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:46:50.021 ************************************ 00:46:50.021 END TEST nvmf_target_core_interrupt_mode 00:46:50.021 ************************************ 00:46:50.022 22:45:45 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:46:50.022 22:45:45 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:46:50.022 22:45:45 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:46:50.022 22:45:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:46:50.022 ************************************ 00:46:50.022 START TEST nvmf_interrupt 00:46:50.022 ************************************ 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:46:50.022 * Looking for test storage... 
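Annotation: the nvmf_bdevio teardown above finishes by undoing the network plumbing: the iptr helper re-applies the firewall with every harness-tagged rule filtered out, _remove_spdk_ns (its xtrace is redirected away here) disposes of the namespace, and the initiator address is flushed. The SPDK_NVMF comment string added at setup is what makes the first step a one-line filter:

    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only rules tagged SPDK_NVMF at setup
    ip netns delete cvl_0_0_ns_spdk                        # presumed body of _remove_spdk_ns; silenced in this log
    ip -4 addr flush cvl_0_1                               # release the initiator-side address

The log then moves on to the next suite, run_test nvmf_interrupt, which re-enters the same common.sh bootstrap; that is why the lcov version probe and the PCI scan repeat below.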
00:46:50.022 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lcov --version 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:46:50.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:50.022 --rc genhtml_branch_coverage=1 00:46:50.022 --rc genhtml_function_coverage=1 00:46:50.022 --rc genhtml_legend=1 00:46:50.022 --rc geninfo_all_blocks=1 00:46:50.022 --rc geninfo_unexecuted_blocks=1 00:46:50.022 00:46:50.022 ' 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:46:50.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:50.022 --rc genhtml_branch_coverage=1 00:46:50.022 --rc genhtml_function_coverage=1 00:46:50.022 --rc genhtml_legend=1 00:46:50.022 --rc geninfo_all_blocks=1 00:46:50.022 --rc geninfo_unexecuted_blocks=1 00:46:50.022 00:46:50.022 ' 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:46:50.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:50.022 --rc genhtml_branch_coverage=1 00:46:50.022 --rc genhtml_function_coverage=1 00:46:50.022 --rc genhtml_legend=1 00:46:50.022 --rc geninfo_all_blocks=1 00:46:50.022 --rc geninfo_unexecuted_blocks=1 00:46:50.022 00:46:50.022 ' 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:46:50.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:50.022 --rc genhtml_branch_coverage=1 00:46:50.022 --rc genhtml_function_coverage=1 00:46:50.022 --rc genhtml_legend=1 00:46:50.022 --rc geninfo_all_blocks=1 00:46:50.022 --rc geninfo_unexecuted_blocks=1 00:46:50.022 00:46:50.022 ' 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:50.022 22:45:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:50.301 22:45:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:46:50.301 22:45:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:46:50.301 22:45:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:50.301 22:45:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:50.301 22:45:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:46:50.301 22:45:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:50.301 22:45:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:46:50.301 22:45:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:46:50.301 22:45:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:50.301 22:45:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:50.301 22:45:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:50.301 22:45:45 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:50.301 22:45:45 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:50.301 22:45:45 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:50.301 22:45:45 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:46:50.301 22:45:45 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:50.301 22:45:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:46:50.301 22:45:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:46:50.301 22:45:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:46:50.301 22:45:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:50.301 22:45:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:50.301 22:45:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:50.301 22:45:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:46:50.301 22:45:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:46:50.301 22:45:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:46:50.301 22:45:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:46:50.301 22:45:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:46:50.301 22:45:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:46:50.301 22:45:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:46:50.301 22:45:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:46:50.301 22:45:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:46:50.301 22:45:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:46:50.301 22:45:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # prepare_net_devs 00:46:50.301 22:45:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # local -g is_hw=no 00:46:50.301 22:45:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # remove_spdk_ns 00:46:50.301 22:45:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:50.302 22:45:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:46:50.302 22:45:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:50.302 22:45:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:46:50.302 22:45:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:46:50.302 22:45:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:46:50.302 22:45:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:46:58.439 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:58.439 22:45:52 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:46:58.439 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:46:58.439 Found net devices under 0000:4b:00.0: cvl_0_0 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:46:58.439 Found net devices under 0000:4b:00.1: cvl_0_1 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # is_hw=yes 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:46:58.439 22:45:52 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:46:58.439 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:46:58.439 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:46:58.439 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.699 ms 00:46:58.439 00:46:58.439 --- 10.0.0.2 ping statistics --- 00:46:58.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:58.440 rtt min/avg/max/mdev = 0.699/0.699/0.699/0.000 ms 00:46:58.440 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:46:58.440 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:46:58.440 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:46:58.440 00:46:58.440 --- 10.0.0.1 ping statistics --- 00:46:58.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:58.440 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:46:58.440 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:46:58.440 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@448 -- # return 0 00:46:58.440 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:46:58.440 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:46:58.440 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:46:58.440 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:46:58.440 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:46:58.440 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:46:58.440 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:46:58.440 22:45:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:46:58.440 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:46:58.440 22:45:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:46:58.440 22:45:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:46:58.440 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # nvmfpid=494453 00:46:58.440 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # waitforlisten 494453 00:46:58.440 22:45:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:46:58.440 22:45:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@831 -- # '[' -z 494453 ']' 00:46:58.440 22:45:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:58.440 22:45:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:46:58.440 22:45:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:58.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:58.440 22:45:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:46:58.440 22:45:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:46:58.440 [2024-10-01 22:45:52.703465] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:46:58.440 [2024-10-01 22:45:52.704588] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:46:58.440 [2024-10-01 22:45:52.704660] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:58.440 [2024-10-01 22:45:52.776567] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:46:58.440 [2024-10-01 22:45:52.850552] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:46:58.440 [2024-10-01 22:45:52.850590] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:58.440 [2024-10-01 22:45:52.850597] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:58.440 [2024-10-01 22:45:52.850604] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:58.440 [2024-10-01 22:45:52.850610] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:46:58.440 [2024-10-01 22:45:52.850755] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:46:58.440 [2024-10-01 22:45:52.850845] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:46:58.440 [2024-10-01 22:45:52.957243] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:46:58.440 [2024-10-01 22:45:52.957758] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:46:58.440 [2024-10-01 22:45:52.958096] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:46:58.440 22:45:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:46:58.440 22:45:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # return 0 00:46:58.440 22:45:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:46:58.440 22:45:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:46:58.440 22:45:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:46:58.440 22:45:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:58.440 22:45:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:46:58.440 22:45:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:46:58.440 22:45:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:46:58.440 22:45:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:46:58.440 5000+0 records in 00:46:58.440 5000+0 records out 00:46:58.440 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0054382 s, 1.9 GB/s 00:46:58.440 22:45:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:46:58.440 22:45:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:58.440 22:45:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:46:58.440 AIO0 00:46:58.440 22:45:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:58.440 22:45:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:46:58.440 22:45:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:58.440 22:45:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:46:58.440 [2024-10-01 22:45:53.575362] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:58.440 22:45:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:58.440 22:45:53 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:46:58.440 22:45:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:58.440 22:45:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:46:58.440 22:45:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:58.440 22:45:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:46:58.440 22:45:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:58.440 22:45:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:46:58.440 22:45:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:58.440 22:45:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:46:58.440 22:45:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:58.440 22:45:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:46:58.440 [2024-10-01 22:45:53.615988] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:58.440 22:45:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:58.440 22:45:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:46:58.440 22:45:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 494453 0 00:46:58.440 22:45:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 494453 0 idle 00:46:58.440 22:45:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=494453 00:46:58.440 22:45:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:46:58.440 22:45:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:46:58.440 22:45:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:46:58.440 22:45:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:46:58.440 22:45:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:46:58.440 22:45:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:46:58.440 22:45:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:46:58.440 22:45:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:46:58.440 22:45:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:46:58.440 22:45:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 494453 -w 256 00:46:58.440 22:45:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:46:58.701 22:45:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 494453 root 20 0 128.2g 42624 32256 S 0.0 0.0 0:00.31 reactor_0' 00:46:58.701 22:45:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 494453 root 20 0 128.2g 42624 32256 S 0.0 0.0 0:00.31 reactor_0 00:46:58.701 22:45:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:46:58.701 22:45:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:46:58.701 22:45:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:46:58.701 22:45:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:46:58.701 22:45:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:46:58.701 22:45:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:46:58.701 22:45:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:46:58.701 22:45:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:46:58.701 22:45:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:46:58.701 22:45:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 494453 1 00:46:58.701 22:45:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 494453 1 idle 00:46:58.701 22:45:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=494453 00:46:58.701 22:45:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:46:58.701 22:45:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:46:58.701 22:45:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:46:58.701 22:45:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:46:58.701 22:45:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:46:58.701 22:45:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:46:58.702 22:45:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:46:58.702 22:45:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:46:58.702 22:45:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:46:58.702 22:45:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 494453 -w 256 00:46:58.702 22:45:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:46:58.962 22:45:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 494460 root 20 0 128.2g 42624 32256 S 0.0 0.0 0:00.00 reactor_1' 00:46:58.962 22:45:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 494460 root 20 0 128.2g 42624 32256 S 0.0 0.0 0:00.00 reactor_1 00:46:58.962 22:45:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:46:58.962 22:45:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:46:58.962 22:45:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:46:58.962 22:45:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:46:58.963 22:45:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:46:58.963 22:45:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:46:58.963 22:45:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:46:58.963 22:45:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:46:58.963 22:45:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:46:58.963 22:45:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=494695 00:46:58.963 22:45:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:46:58.963 22:45:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:46:58.963 22:45:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC 
-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:46:58.963 22:45:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 494453 0 00:46:58.963 22:45:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 494453 0 busy 00:46:58.963 22:45:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=494453 00:46:58.963 22:45:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:46:58.963 22:45:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:46:58.963 22:45:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:46:58.963 22:45:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:46:58.963 22:45:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:46:58.963 22:45:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:46:58.963 22:45:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:46:58.963 22:45:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:46:58.963 22:45:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 494453 -w 256 00:46:58.963 22:45:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:46:58.963 22:45:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 494453 root 20 0 128.2g 43776 32256 R 99.9 0.0 0:00.55 reactor_0' 00:46:58.963 22:45:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 494453 root 20 0 128.2g 43776 32256 R 99.9 0.0 0:00.55 reactor_0 00:46:58.963 22:45:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:46:58.963 22:45:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:46:58.963 22:45:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:46:58.963 22:45:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:46:58.963 22:45:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:46:58.963 22:45:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:46:58.963 22:45:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:46:58.963 22:45:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:46:58.963 22:45:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:46:58.963 22:45:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:46:58.963 22:45:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 494453 1 00:46:58.963 22:45:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 494453 1 busy 00:46:58.963 22:45:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=494453 00:46:58.963 22:45:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:46:58.963 22:45:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:46:58.963 22:45:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:46:58.963 22:45:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:46:58.963 22:45:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:46:58.963 22:45:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:46:58.963 22:45:54 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:46:58.963 22:45:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:46:58.963 22:45:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 494453 -w 256 00:46:58.963 22:45:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:46:59.224 22:45:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 494460 root 20 0 128.2g 43776 32256 R 87.5 0.0 0:00.30 reactor_1' 00:46:59.224 22:45:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 494460 root 20 0 128.2g 43776 32256 R 87.5 0.0 0:00.30 reactor_1 00:46:59.224 22:45:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:46:59.224 22:45:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:46:59.224 22:45:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=87.5 00:46:59.224 22:45:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=87 00:46:59.224 22:45:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:46:59.224 22:45:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:46:59.225 22:45:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:46:59.225 22:45:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:46:59.225 22:45:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 494695 00:47:09.228 Initializing NVMe Controllers 00:47:09.228 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:47:09.228 Controller IO queue size 256, less than required. 00:47:09.228 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:47:09.228 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:47:09.228 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:47:09.228 Initialization complete. Launching workers. 
00:47:09.228 ======================================================== 00:47:09.228 Latency(us) 00:47:09.228 Device Information : IOPS MiB/s Average min max 00:47:09.228 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16477.65 64.37 15545.97 2549.32 18042.95 00:47:09.228 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 19327.95 75.50 13246.55 7594.25 27684.32 00:47:09.228 ======================================================== 00:47:09.228 Total : 35805.60 139.87 14304.74 2549.32 27684.32 00:47:09.228 00:47:09.228 22:46:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:47:09.228 22:46:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 494453 0 00:47:09.228 22:46:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 494453 0 idle 00:47:09.228 22:46:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=494453 00:47:09.228 22:46:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:47:09.228 22:46:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:47:09.228 22:46:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:47:09.228 22:46:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:47:09.228 22:46:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:47:09.228 22:46:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:47:09.228 22:46:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:47:09.228 22:46:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:47:09.228 22:46:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:47:09.228 22:46:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 494453 -w 256 00:47:09.228 22:46:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:47:09.228 22:46:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 494453 root 20 0 128.2g 43776 32256 S 6.7 0.0 0:20.32 reactor_0' 00:47:09.228 22:46:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 494453 root 20 0 128.2g 43776 32256 S 6.7 0.0 0:20.32 reactor_0 00:47:09.228 22:46:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:47:09.228 22:46:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:47:09.228 22:46:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:47:09.228 22:46:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:47:09.228 22:46:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:47:09.228 22:46:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:47:09.228 22:46:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:47:09.228 22:46:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:47:09.228 22:46:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:47:09.228 22:46:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 494453 1 00:47:09.228 22:46:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 494453 1 idle 00:47:09.228 22:46:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=494453 00:47:09.228 22:46:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # 
local idx=1 00:47:09.228 22:46:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:47:09.228 22:46:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:47:09.228 22:46:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:47:09.228 22:46:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:47:09.228 22:46:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:47:09.228 22:46:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:47:09.228 22:46:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:47:09.228 22:46:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:47:09.228 22:46:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 494453 -w 256 00:47:09.228 22:46:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:47:09.228 22:46:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 494460 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:10.00 reactor_1' 00:47:09.228 22:46:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:47:09.489 22:46:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 494460 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:10.00 reactor_1 00:47:09.489 22:46:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:47:09.489 22:46:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:47:09.489 22:46:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:47:09.489 22:46:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:47:09.489 22:46:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:47:09.489 22:46:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:47:09.489 22:46:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:47:09.489 22:46:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:47:09.749 22:46:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:47:09.749 22:46:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1198 -- # local i=0 00:47:09.749 22:46:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:47:09.749 22:46:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:47:09.749 22:46:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1205 -- # sleep 2 00:47:12.294 22:46:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:47:12.294 22:46:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:47:12.294 22:46:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:47:12.294 22:46:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:47:12.294 22:46:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:47:12.294 22:46:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # return 0 00:47:12.294 22:46:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # 
for i in {0..1} 00:47:12.294 22:46:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 494453 0 00:47:12.294 22:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 494453 0 idle 00:47:12.294 22:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=494453 00:47:12.294 22:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:47:12.295 22:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:47:12.295 22:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:47:12.295 22:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:47:12.295 22:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:47:12.295 22:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:47:12.295 22:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:47:12.295 22:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:47:12.295 22:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:47:12.295 22:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 494453 -w 256 00:47:12.295 22:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:47:12.295 22:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 494453 root 20 0 128.2g 78336 32256 S 0.0 0.1 0:20.56 reactor_0' 00:47:12.295 22:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 494453 root 20 0 128.2g 78336 32256 S 0.0 0.1 0:20.56 reactor_0 00:47:12.295 22:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:47:12.295 22:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:47:12.295 22:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:47:12.295 22:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:47:12.295 22:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:47:12.295 22:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:47:12.295 22:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:47:12.295 22:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:47:12.295 22:46:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:47:12.295 22:46:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 494453 1 00:47:12.295 22:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 494453 1 idle 00:47:12.295 22:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=494453 00:47:12.295 22:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:47:12.295 22:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:47:12.295 22:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:47:12.295 22:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:47:12.295 22:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:47:12.295 22:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:47:12.295 22:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:47:12.295 22:46:07 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:47:12.295 22:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:47:12.295 22:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:47:12.295 22:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 494453 -w 256 00:47:12.295 22:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 494460 root 20 0 128.2g 78336 32256 S 0.0 0.1 0:10.14 reactor_1' 00:47:12.295 22:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 494460 root 20 0 128.2g 78336 32256 S 0.0 0.1 0:10.14 reactor_1 00:47:12.295 22:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:47:12.295 22:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:47:12.295 22:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:47:12.295 22:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:47:12.295 22:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:47:12.295 22:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:47:12.295 22:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:47:12.295 22:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:47:12.295 22:46:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:47:12.556 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:47:12.556 22:46:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:47:12.556 22:46:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0 00:47:12.556 22:46:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:47:12.556 22:46:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:47:12.556 22:46:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:47:12.556 22:46:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:47:12.556 22:46:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0 00:47:12.556 22:46:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:47:12.556 22:46:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:47:12.556 22:46:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # nvmfcleanup 00:47:12.556 22:46:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:47:12.556 22:46:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:47:12.556 22:46:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:47:12.556 22:46:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:47:12.556 22:46:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:47:12.556 rmmod nvme_tcp 00:47:12.556 rmmod nvme_fabrics 00:47:12.556 rmmod nvme_keyring 00:47:12.556 22:46:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:47:12.556 22:46:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:47:12.556 22:46:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:47:12.556 22:46:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@515 -- # '[' -n 494453 ']' 00:47:12.556 
22:46:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # killprocess 494453 00:47:12.556 22:46:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@950 -- # '[' -z 494453 ']' 00:47:12.556 22:46:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # kill -0 494453 00:47:12.556 22:46:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # uname 00:47:12.556 22:46:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:47:12.556 22:46:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 494453 00:47:12.556 22:46:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:47:12.556 22:46:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:47:12.556 22:46:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 494453' 00:47:12.556 killing process with pid 494453 00:47:12.556 22:46:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # kill 494453 00:47:12.556 22:46:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@974 -- # wait 494453 00:47:12.817 22:46:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:47:12.817 22:46:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:47:12.817 22:46:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:47:12.817 22:46:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:47:12.817 22:46:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-save 00:47:12.817 22:46:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:47:12.817 22:46:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-restore 00:47:12.817 22:46:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:47:12.817 22:46:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:47:12.817 22:46:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:12.817 22:46:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:47:12.817 22:46:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:15.361 22:46:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:47:15.361 00:47:15.361 real 0m24.957s 00:47:15.361 user 0m40.085s 00:47:15.361 sys 0m9.525s 00:47:15.361 22:46:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:47:15.361 22:46:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:47:15.361 ************************************ 00:47:15.361 END TEST nvmf_interrupt 00:47:15.361 ************************************ 00:47:15.361 00:47:15.361 real 30m4.925s 00:47:15.361 user 61m59.053s 00:47:15.361 sys 9m58.454s 00:47:15.361 22:46:10 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:47:15.361 22:46:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:15.361 ************************************ 00:47:15.361 END TEST nvmf_tcp 00:47:15.361 ************************************ 00:47:15.361 22:46:10 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:47:15.361 22:46:10 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:47:15.362 22:46:10 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
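The nvmf_interrupt teardown traced above runs in a fixed order: disconnect the initiator, unload the kernel NVMe transport modules, kill the target process, strip the SPDK-tagged firewall rules, and remove the namespace plumbing. The following is a condensed bash sketch of that order, distilled from this trace rather than taken from the harness source; the controller NQN, the namespace name cvl_0_0_ns_spdk, and the interface cvl_0_1 are values from this particular run and would differ elsewhere.

# Condensed teardown order as seen in the nvmf_interrupt trace (sketch only).
nvme disconnect -n nqn.2016-06.io.spdk:cnode1          # drop the initiator session first
modprobe -v -r nvme-tcp nvme-fabrics                   # unload transports; the harness runs this under set +e since modules may be busy
kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null         # stop the nvmf_tgt reactor process
iptables-save | grep -v SPDK_NVMF | iptables-restore   # remove only rules tagged with the SPDK_NVMF comment
ip netns del cvl_0_0_ns_spdk                           # drop the target-side network namespace
ip -4 addr flush cvl_0_1                               # clear the initiator-side address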
00:47:15.362 22:46:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:47:15.362 22:46:10 -- common/autotest_common.sh@10 -- # set +x 00:47:15.362 ************************************ 00:47:15.362 START TEST spdkcli_nvmf_tcp 00:47:15.362 ************************************ 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:47:15.362 * Looking for test storage... 00:47:15.362 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:47:15.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:15.362 --rc genhtml_branch_coverage=1 00:47:15.362 --rc genhtml_function_coverage=1 00:47:15.362 --rc genhtml_legend=1 00:47:15.362 --rc geninfo_all_blocks=1 00:47:15.362 --rc geninfo_unexecuted_blocks=1 00:47:15.362 00:47:15.362 ' 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:47:15.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:15.362 --rc genhtml_branch_coverage=1 00:47:15.362 --rc genhtml_function_coverage=1 00:47:15.362 --rc genhtml_legend=1 00:47:15.362 --rc geninfo_all_blocks=1 00:47:15.362 --rc geninfo_unexecuted_blocks=1 00:47:15.362 00:47:15.362 ' 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:47:15.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:15.362 --rc genhtml_branch_coverage=1 00:47:15.362 --rc genhtml_function_coverage=1 00:47:15.362 --rc genhtml_legend=1 00:47:15.362 --rc geninfo_all_blocks=1 00:47:15.362 --rc geninfo_unexecuted_blocks=1 00:47:15.362 00:47:15.362 ' 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:47:15.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:15.362 --rc genhtml_branch_coverage=1 00:47:15.362 --rc genhtml_function_coverage=1 00:47:15.362 --rc genhtml_legend=1 00:47:15.362 --rc geninfo_all_blocks=1 00:47:15.362 --rc geninfo_unexecuted_blocks=1 00:47:15.362 00:47:15.362 ' 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:47:15.362 
22:46:10 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:47:15.362 22:46:10 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:47:15.362 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=497995 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 497995 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 497995 ']' 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:47:15.362 22:46:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:15.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:15.363 22:46:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:47:15.363 22:46:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:15.363 [2024-10-01 22:46:10.458497] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
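At this point run_nvmf_tgt has launched build/bin/nvmf_tgt (pid 497995) and waitforlisten polls /var/tmp/spdk.sock with max_retries=100, as the trace shows. A rough sketch of that polling loop, assuming scripts/rpc.py as the liveness probe and $rootdir as the SPDK checkout (the real helper lives in test/common/autotest_common.sh):

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for (( i = 0; i < max_retries; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1     # target exited before it started listening
        if "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &>/dev/null; then
            return 0                               # socket is up and answering RPCs
        fi
        sleep 0.5
    done
    return 1                                       # never came up within the retry budget
}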
00:47:15.363 [2024-10-01 22:46:10.458551] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid497995 ] 00:47:15.363 [2024-10-01 22:46:10.518294] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:47:15.363 [2024-10-01 22:46:10.583714] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:47:15.363 [2024-10-01 22:46:10.583877] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:47:16.305 22:46:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:47:16.305 22:46:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:47:16.305 22:46:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:47:16.305 22:46:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:47:16.305 22:46:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:16.305 22:46:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:47:16.305 22:46:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:47:16.305 22:46:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:47:16.305 22:46:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:47:16.305 22:46:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:16.305 22:46:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:47:16.305 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:47:16.305 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:47:16.305 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:47:16.305 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:47:16.305 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:47:16.305 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:47:16.305 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:47:16.305 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:47:16.305 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:47:16.305 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:47:16.305 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:47:16.305 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:47:16.305 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:47:16.305 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:47:16.305 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:47:16.305 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:47:16.305 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:47:16.305 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:47:16.305 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:47:16.306 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:47:16.306 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:47:16.306 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:47:16.306 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:47:16.306 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:47:16.306 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:47:16.306 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:47:16.306 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:47:16.306 ' 00:47:18.931 [2024-10-01 22:46:13.918671] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:20.388 [2024-10-01 22:46:15.283023] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:47:22.933 [2024-10-01 22:46:17.806386] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:47:24.851 [2024-10-01 22:46:19.808317] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:47:26.237 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:47:26.237 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:47:26.237 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:47:26.237 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:47:26.237 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:47:26.237 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:47:26.237 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:47:26.237 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:47:26.237 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:47:26.237 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:47:26.237 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:47:26.237 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:47:26.237 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:47:26.237 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:47:26.237 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:47:26.237 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:47:26.237 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:47:26.237 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:47:26.237 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:47:26.237 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:47:26.237 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:47:26.237 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:47:26.237 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:47:26.237 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:47:26.237 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:47:26.237 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:47:26.237 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:47:26.237 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:47:26.237 22:46:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:47:26.237 22:46:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:47:26.237 22:46:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:26.237 22:46:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:47:26.237 22:46:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:47:26.237 22:46:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:26.237 22:46:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:47:26.237 22:46:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:47:26.806 22:46:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:47:26.807 22:46:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:47:26.807 22:46:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:47:26.807 22:46:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:47:26.807 22:46:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:26.807 
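The check_match step just traced (spdkcli/common.sh@44-46) captures the live 'll /nvmf' tree and compares it against a .match template with the test/app/match helper, removing the capture on success. Roughly, with $rootdir and $testdir standing in for the long workspace paths spelled out above:

"$rootdir/scripts/spdkcli.py" ll /nvmf > "$testdir/match_files/spdkcli_nvmf.test"
"$rootdir/test/app/match/match" "$testdir/match_files/spdkcli_nvmf.test.match"   # non-zero exit fails the test
rm -f "$testdir/match_files/spdkcli_nvmf.test"                                   # keep the capture only on failure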
22:46:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:47:26.807 22:46:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:47:26.807 22:46:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:26.807 22:46:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:47:26.807 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:47:26.807 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:47:26.807 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:47:26.807 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:47:26.807 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:47:26.807 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:47:26.807 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:47:26.807 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:47:26.807 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:47:26.807 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:47:26.807 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:47:26.807 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:47:26.807 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:47:26.807 ' 00:47:32.095 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:47:32.095 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:47:32.095 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:47:32.095 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:47:32.095 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:47:32.095 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:47:32.095 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:47:32.095 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:47:32.095 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:47:32.095 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:47:32.095 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:47:32.095 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:47:32.095 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:47:32.095 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:47:32.095 22:46:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:47:32.095 22:46:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:47:32.095 22:46:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:32.095 
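Both the create job earlier and the clear job above feed spdkcli_job.py a single quoted argument of newline-separated entries, 'command' 'expected output' plus an optional trailing boolean, which the script echoes back as the Executing command: [...] tuples; note the teardown runs in reverse dependency order (namespaces, hosts, and listeners before subsystems, subsystems before the malloc bdevs). A minimal standalone invocation in that input format, against an already-running target with $rootdir as the SPDK checkout (the trailing boolean is passed through as shown in the tuples; its exact semantics are defined in spdkcli_job.py):

"$rootdir/test/spdkcli/spdkcli_job.py" "'/bdevs/malloc create 32 512 Malloc1' 'Malloc1' True
'/bdevs/malloc delete Malloc1' 'Malloc1'
"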
22:46:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 497995 00:47:32.095 22:46:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 497995 ']' 00:47:32.095 22:46:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 497995 00:47:32.095 22:46:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:47:32.095 22:46:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:47:32.095 22:46:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 497995 00:47:32.095 22:46:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:47:32.095 22:46:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:47:32.095 22:46:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 497995' 00:47:32.095 killing process with pid 497995 00:47:32.095 22:46:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 497995 00:47:32.095 22:46:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 497995 00:47:32.095 22:46:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:47:32.095 22:46:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:47:32.095 22:46:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 497995 ']' 00:47:32.095 22:46:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 497995 00:47:32.095 22:46:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 497995 ']' 00:47:32.095 22:46:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 497995 00:47:32.095 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (497995) - No such process 00:47:32.095 22:46:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 497995 is not found' 00:47:32.096 Process with pid 497995 is not found 00:47:32.096 22:46:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:47:32.096 22:46:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:47:32.096 22:46:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:47:32.096 00:47:32.096 real 0m17.050s 00:47:32.096 user 0m36.419s 00:47:32.096 sys 0m0.772s 00:47:32.096 22:46:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:47:32.096 22:46:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:32.096 ************************************ 00:47:32.096 END TEST spdkcli_nvmf_tcp 00:47:32.096 ************************************ 00:47:32.096 22:46:27 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:47:32.096 22:46:27 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:47:32.096 22:46:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:47:32.096 22:46:27 -- common/autotest_common.sh@10 -- # set +x 00:47:32.096 ************************************ 00:47:32.096 START TEST nvmf_identify_passthru 00:47:32.096 ************************************ 00:47:32.096 22:46:27 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:47:32.358 * Looking for test storage... 
00:47:32.358 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:47:32.358 22:46:27 nvmf_identify_passthru -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:47:32.358 22:46:27 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lcov --version 00:47:32.358 22:46:27 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:47:32.358 22:46:27 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:47:32.358 22:46:27 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:32.358 22:46:27 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:32.358 22:46:27 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:32.358 22:46:27 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:47:32.358 22:46:27 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:47:32.358 22:46:27 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:47:32.358 22:46:27 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:47:32.358 22:46:27 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:47:32.358 22:46:27 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:47:32.358 22:46:27 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:47:32.358 22:46:27 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:32.358 22:46:27 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:47:32.358 22:46:27 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:47:32.358 22:46:27 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:32.358 22:46:27 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:47:32.358 22:46:27 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:47:32.358 22:46:27 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:47:32.358 22:46:27 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:32.358 22:46:27 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:47:32.358 22:46:27 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:47:32.358 22:46:27 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:47:32.358 22:46:27 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:47:32.358 22:46:27 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:32.358 22:46:27 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:47:32.358 22:46:27 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:47:32.358 22:46:27 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:32.358 22:46:27 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:32.358 22:46:27 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:47:32.358 22:46:27 nvmf_identify_passthru -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:32.358 22:46:27 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:47:32.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:32.358 --rc genhtml_branch_coverage=1 00:47:32.358 --rc genhtml_function_coverage=1 00:47:32.358 --rc genhtml_legend=1 00:47:32.358 --rc geninfo_all_blocks=1 00:47:32.358 --rc geninfo_unexecuted_blocks=1 00:47:32.358 00:47:32.358 ' 00:47:32.358 22:46:27 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:47:32.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:32.358 --rc genhtml_branch_coverage=1 00:47:32.358 --rc genhtml_function_coverage=1 00:47:32.358 --rc genhtml_legend=1 00:47:32.358 --rc geninfo_all_blocks=1 00:47:32.358 --rc geninfo_unexecuted_blocks=1 00:47:32.358 00:47:32.358 ' 00:47:32.358 22:46:27 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:47:32.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:32.358 --rc genhtml_branch_coverage=1 00:47:32.358 --rc genhtml_function_coverage=1 00:47:32.358 --rc genhtml_legend=1 00:47:32.358 --rc geninfo_all_blocks=1 00:47:32.358 --rc geninfo_unexecuted_blocks=1 00:47:32.358 00:47:32.358 ' 00:47:32.358 22:46:27 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:47:32.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:32.358 --rc genhtml_branch_coverage=1 00:47:32.358 --rc genhtml_function_coverage=1 00:47:32.358 --rc genhtml_legend=1 00:47:32.358 --rc geninfo_all_blocks=1 00:47:32.358 --rc geninfo_unexecuted_blocks=1 00:47:32.358 00:47:32.358 ' 00:47:32.358 22:46:27 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:47:32.358 22:46:27 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:47:32.358 22:46:27 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:32.358 22:46:27 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:32.358 22:46:27 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:32.358 22:46:27 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:47:32.358 22:46:27 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:32.358 22:46:27 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:32.358 22:46:27 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:32.358 22:46:27 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:32.358 22:46:27 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:32.358 22:46:27 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:32.358 22:46:27 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:47:32.358 22:46:27 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:47:32.358 22:46:27 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:32.358 22:46:27 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:32.358 22:46:27 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:47:32.358 22:46:27 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:32.358 22:46:27 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:47:32.358 22:46:27 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:47:32.359 22:46:27 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:32.359 22:46:27 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:32.359 22:46:27 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:32.359 22:46:27 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:32.359 22:46:27 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:32.359 22:46:27 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:32.359 22:46:27 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:47:32.359 22:46:27 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:32.359 22:46:27 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:47:32.359 22:46:27 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:47:32.359 22:46:27 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:47:32.359 22:46:27 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:32.359 22:46:27 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:32.359 22:46:27 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:32.359 22:46:27 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:47:32.359 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:47:32.359 22:46:27 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:47:32.359 22:46:27 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:47:32.359 22:46:27 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:47:32.359 22:46:27 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:47:32.359 22:46:27 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:47:32.359 22:46:27 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:32.359 22:46:27 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:32.359 22:46:27 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:32.359 22:46:27 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:32.359 22:46:27 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:32.359 22:46:27 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:32.359 22:46:27 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:47:32.359 22:46:27 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:32.359 22:46:27 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:47:32.359 22:46:27 nvmf_identify_passthru -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:47:32.359 22:46:27 nvmf_identify_passthru -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:47:32.359 22:46:27 nvmf_identify_passthru -- nvmf/common.sh@474 -- # prepare_net_devs 00:47:32.359 22:46:27 nvmf_identify_passthru -- nvmf/common.sh@436 -- # local -g is_hw=no 00:47:32.359 22:46:27 nvmf_identify_passthru -- nvmf/common.sh@438 -- # remove_spdk_ns 00:47:32.359 22:46:27 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:32.359 22:46:27 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:47:32.359 22:46:27 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:32.359 22:46:27 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:47:32.359 22:46:27 nvmf_identify_passthru -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:47:32.359 22:46:27 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:47:32.359 22:46:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:47:40.495 22:46:34 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:47:40.495 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:47:40.495 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:47:40.495 Found net devices under 0000:4b:00.0: cvl_0_0 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:47:40.495 Found net devices under 0000:4b:00.1: cvl_0_1 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@440 -- # is_hw=yes 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:47:40.495 22:46:34 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:47:40.495 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:47:40.496 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:47:40.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:47:40.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.725 ms 00:47:40.496 00:47:40.496 --- 10.0.0.2 ping statistics --- 00:47:40.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:40.496 rtt min/avg/max/mdev = 0.725/0.725/0.725/0.000 ms 00:47:40.496 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:47:40.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:47:40.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:47:40.496 00:47:40.496 --- 10.0.0.1 ping statistics --- 00:47:40.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:40.496 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:47:40.496 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:47:40.496 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@448 -- # return 0 00:47:40.496 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:47:40.496 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:47:40.496 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:47:40.496 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:47:40.496 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:47:40.496 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:47:40.496 22:46:34 nvmf_identify_passthru -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:47:40.496 22:46:34 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:47:40.496 22:46:34 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:47:40.496 22:46:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:40.496 22:46:34 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:47:40.496 22:46:34 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:47:40.496 22:46:34 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:47:40.496 22:46:34 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:47:40.496 22:46:34 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:47:40.496 22:46:34 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:47:40.496 22:46:34 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:47:40.496 22:46:34 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:47:40.496 22:46:34 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:47:40.496 22:46:34 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:47:40.496 22:46:34 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:47:40.496 22:46:34 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:47:40.496 22:46:34 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:65:00.0 00:47:40.496 22:46:34 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:47:40.496 22:46:34 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:47:40.496 22:46:34 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:47:40.496 22:46:34 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:47:40.496 22:46:34 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:47:40.496 22:46:35 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605480 00:47:40.496 22:46:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:47:40.496 22:46:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:47:40.496 22:46:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:47:40.496 22:46:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:47:40.496 22:46:35 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:47:40.496 22:46:35 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:47:40.496 22:46:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:40.496 22:46:35 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:47:40.496 22:46:35 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:47:40.496 22:46:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:40.496 22:46:35 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=505094 00:47:40.496 22:46:35 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:47:40.496 22:46:35 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:47:40.496 22:46:35 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 505094 00:47:40.496 22:46:35 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 505094 ']' 00:47:40.496 22:46:35 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:40.496 22:46:35 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:47:40.496 22:46:35 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:40.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:40.496 22:46:35 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:47:40.496 22:46:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:40.756 [2024-10-01 22:46:35.775420] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:47:40.756 [2024-10-01 22:46:35.775472] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:40.756 [2024-10-01 22:46:35.841300] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:47:40.756 [2024-10-01 22:46:35.906798] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:47:40.756 [2024-10-01 22:46:35.906832] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:47:40.756 [2024-10-01 22:46:35.906840] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:47:40.756 [2024-10-01 22:46:35.906847] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:47:40.756 [2024-10-01 22:46:35.906853] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:47:40.757 [2024-10-01 22:46:35.906989] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:47:40.757 [2024-10-01 22:46:35.907103] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:47:40.757 [2024-10-01 22:46:35.907232] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:47:40.757 [2024-10-01 22:46:35.907234] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:47:41.327 22:46:36 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:47:41.327 22:46:36 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:47:41.327 22:46:36 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:47:41.327 22:46:36 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:41.327 22:46:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:41.327 INFO: Log level set to 20 00:47:41.327 INFO: Requests: 00:47:41.327 { 00:47:41.327 "jsonrpc": "2.0", 00:47:41.327 "method": "nvmf_set_config", 00:47:41.327 "id": 1, 00:47:41.327 "params": { 00:47:41.327 "admin_cmd_passthru": { 00:47:41.327 "identify_ctrlr": true 00:47:41.327 } 00:47:41.327 } 00:47:41.327 } 00:47:41.327 00:47:41.587 INFO: response: 00:47:41.587 { 00:47:41.587 "jsonrpc": "2.0", 00:47:41.587 "id": 1, 00:47:41.588 "result": true 00:47:41.588 } 00:47:41.588 00:47:41.588 22:46:36 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:41.588 22:46:36 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:47:41.588 22:46:36 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:41.588 22:46:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:41.588 INFO: Setting log level to 20 00:47:41.588 INFO: Setting log level to 20 00:47:41.588 INFO: Log level set to 20 00:47:41.588 INFO: Log level set to 20 00:47:41.588 INFO: Requests: 00:47:41.588 { 00:47:41.588 "jsonrpc": "2.0", 00:47:41.588 "method": "framework_start_init", 00:47:41.588 "id": 1 00:47:41.588 } 00:47:41.588 00:47:41.588 INFO: Requests: 00:47:41.588 { 00:47:41.588 "jsonrpc": "2.0", 00:47:41.588 "method": "framework_start_init", 00:47:41.588 "id": 1 00:47:41.588 } 00:47:41.588 00:47:41.588 [2024-10-01 22:46:36.695210] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:47:41.588 INFO: response: 00:47:41.588 { 00:47:41.588 "jsonrpc": "2.0", 00:47:41.588 "id": 1, 00:47:41.588 "result": true 00:47:41.588 } 00:47:41.588 00:47:41.588 INFO: response: 00:47:41.588 { 00:47:41.588 "jsonrpc": "2.0", 00:47:41.588 "id": 1, 00:47:41.588 "result": true 00:47:41.588 } 00:47:41.588 00:47:41.588 22:46:36 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:41.588 22:46:36 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:47:41.588 22:46:36 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:41.588 22:46:36 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:47:41.588 INFO: Setting log level to 40 00:47:41.588 INFO: Setting log level to 40 00:47:41.588 INFO: Setting log level to 40 00:47:41.588 [2024-10-01 22:46:36.708541] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:41.588 22:46:36 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:41.588 22:46:36 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:47:41.588 22:46:36 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:47:41.588 22:46:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:41.588 22:46:36 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:47:41.588 22:46:36 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:41.588 22:46:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:41.849 Nvme0n1 00:47:41.849 22:46:37 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:41.849 22:46:37 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:47:41.849 22:46:37 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:41.849 22:46:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:41.849 22:46:37 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:41.850 22:46:37 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:47:41.850 22:46:37 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:41.850 22:46:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:41.850 22:46:37 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:41.850 22:46:37 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:47:41.850 22:46:37 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:41.850 22:46:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:41.850 [2024-10-01 22:46:37.091841] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:47:41.850 22:46:37 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:41.850 22:46:37 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:47:41.850 22:46:37 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:41.850 22:46:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:42.111 [ 00:47:42.111 { 00:47:42.111 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:47:42.111 "subtype": "Discovery", 00:47:42.111 "listen_addresses": [], 00:47:42.111 "allow_any_host": true, 00:47:42.111 "hosts": [] 00:47:42.111 }, 00:47:42.111 { 00:47:42.111 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:47:42.111 "subtype": "NVMe", 00:47:42.111 "listen_addresses": [ 00:47:42.111 { 00:47:42.111 "trtype": "TCP", 00:47:42.111 "adrfam": "IPv4", 00:47:42.111 "traddr": "10.0.0.2", 00:47:42.111 "trsvcid": "4420" 00:47:42.111 } 00:47:42.111 ], 00:47:42.111 "allow_any_host": true, 00:47:42.111 "hosts": [], 00:47:42.111 "serial_number": 
"SPDK00000000000001", 00:47:42.111 "model_number": "SPDK bdev Controller", 00:47:42.111 "max_namespaces": 1, 00:47:42.111 "min_cntlid": 1, 00:47:42.111 "max_cntlid": 65519, 00:47:42.111 "namespaces": [ 00:47:42.111 { 00:47:42.111 "nsid": 1, 00:47:42.111 "bdev_name": "Nvme0n1", 00:47:42.111 "name": "Nvme0n1", 00:47:42.111 "nguid": "36344730526054800025384500000051", 00:47:42.111 "uuid": "36344730-5260-5480-0025-384500000051" 00:47:42.111 } 00:47:42.111 ] 00:47:42.111 } 00:47:42.111 ] 00:47:42.111 22:46:37 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:42.111 22:46:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:47:42.111 22:46:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:47:42.111 22:46:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:47:42.111 22:46:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605480 00:47:42.111 22:46:37 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:47:42.111 22:46:37 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:47:42.111 22:46:37 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:47:42.372 22:46:37 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:47:42.372 22:46:37 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605480 '!=' S64GNE0R605480 ']' 00:47:42.372 22:46:37 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:47:42.372 22:46:37 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:47:42.372 22:46:37 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:42.372 22:46:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:42.372 22:46:37 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:42.372 22:46:37 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:47:42.372 22:46:37 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:47:42.372 22:46:37 nvmf_identify_passthru -- nvmf/common.sh@514 -- # nvmfcleanup 00:47:42.372 22:46:37 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:47:42.372 22:46:37 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:47:42.372 22:46:37 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:47:42.372 22:46:37 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:47:42.372 22:46:37 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:47:42.372 rmmod nvme_tcp 00:47:42.372 rmmod nvme_fabrics 00:47:42.372 rmmod nvme_keyring 00:47:42.372 22:46:37 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:47:42.372 22:46:37 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:47:42.372 22:46:37 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:47:42.372 22:46:37 nvmf_identify_passthru -- nvmf/common.sh@515 -- # '[' -n 
505094 ']' 00:47:42.372 22:46:37 nvmf_identify_passthru -- nvmf/common.sh@516 -- # killprocess 505094 00:47:42.372 22:46:37 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 505094 ']' 00:47:42.372 22:46:37 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 505094 00:47:42.372 22:46:37 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:47:42.372 22:46:37 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:47:42.372 22:46:37 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 505094 00:47:42.632 22:46:37 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:47:42.632 22:46:37 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:47:42.632 22:46:37 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 505094' 00:47:42.632 killing process with pid 505094 00:47:42.633 22:46:37 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 505094 00:47:42.633 22:46:37 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 505094 00:47:42.893 22:46:37 nvmf_identify_passthru -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:47:42.893 22:46:37 nvmf_identify_passthru -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:47:42.893 22:46:37 nvmf_identify_passthru -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:47:42.893 22:46:37 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:47:42.893 22:46:37 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-save 00:47:42.893 22:46:37 nvmf_identify_passthru -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:47:42.893 22:46:37 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-restore 00:47:42.893 22:46:37 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:47:42.893 22:46:37 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:47:42.893 22:46:37 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:42.893 22:46:37 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:47:42.893 22:46:37 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:44.807 22:46:40 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:47:44.807 00:47:44.807 real 0m12.753s 00:47:44.807 user 0m10.314s 00:47:44.807 sys 0m6.382s 00:47:44.807 22:46:40 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:47:44.807 22:46:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:44.807 ************************************ 00:47:44.807 END TEST nvmf_identify_passthru 00:47:44.807 ************************************ 00:47:45.069 22:46:40 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:47:45.069 22:46:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:47:45.069 22:46:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:47:45.069 22:46:40 -- common/autotest_common.sh@10 -- # set +x 00:47:45.069 ************************************ 00:47:45.069 START TEST nvmf_dif 00:47:45.069 ************************************ 00:47:45.069 22:46:40 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:47:45.069 * Looking for test storage... 
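The identify_passthru test that just completed boils down to the sequence sketched below: enable passthru identify before framework init, export the local drive over NVMe/TCP, then verify that identify against the TCP target reports the physical drive's serial and model (S64GNE0R605480 / SAMSUNG) rather than the subsystem defaults ("SPDK00000000000001" / "SPDK bdev Controller") visible in the nvmf_get_subsystems output. This is a condensed sketch only; the actual run additionally pins cores (-m 0xF), enables tracing (-e 0xFFFF), and launches the target inside the cvl_0_0_ns_spdk network namespace.

# Condensed sketch of the passthru-identify flow above; rpc.py talks to
# the default /var/tmp/spdk.sock that nvmf_tgt listens on (the test
# polls with waitforlisten before issuing RPCs).
rpc="$rootdir/scripts/rpc.py"
"$rootdir/build/bin/nvmf_tgt" --wait-for-rpc &

# Passthru identify must be configured before the framework starts,
# hence --wait-for-rpc above.
"$rpc" nvmf_set_config --passthru-identify-ctrlr
"$rpc" framework_start_init

"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# With the custom identify handler enabled, these match the PCIe-side
# values, which is exactly what the test asserts before tearing down.
"$rootdir/build/bin/spdk_nvme_identify" \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' |
    grep -E 'Serial Number:|Model Number:'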
00:47:45.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:47:45.069 22:46:40 nvmf_dif -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:47:45.069 22:46:40 nvmf_dif -- common/autotest_common.sh@1681 -- # lcov --version 00:47:45.069 22:46:40 nvmf_dif -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:47:45.069 22:46:40 nvmf_dif -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:47:45.069 22:46:40 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:45.069 22:46:40 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:45.069 22:46:40 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:45.069 22:46:40 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:47:45.069 22:46:40 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:47:45.069 22:46:40 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:47:45.069 22:46:40 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:47:45.069 22:46:40 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:47:45.069 22:46:40 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:47:45.069 22:46:40 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:47:45.069 22:46:40 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:45.069 22:46:40 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:47:45.069 22:46:40 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:47:45.069 22:46:40 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:45.069 22:46:40 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:47:45.069 22:46:40 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:47:45.069 22:46:40 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:47:45.069 22:46:40 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:45.331 22:46:40 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:47:45.331 22:46:40 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:47:45.331 22:46:40 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:47:45.331 22:46:40 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:47:45.331 22:46:40 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:45.331 22:46:40 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:47:45.331 22:46:40 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:47:45.331 22:46:40 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:45.331 22:46:40 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:45.331 22:46:40 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:47:45.331 22:46:40 nvmf_dif -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:45.331 22:46:40 nvmf_dif -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:47:45.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:45.331 --rc genhtml_branch_coverage=1 00:47:45.331 --rc genhtml_function_coverage=1 00:47:45.331 --rc genhtml_legend=1 00:47:45.331 --rc geninfo_all_blocks=1 00:47:45.331 --rc geninfo_unexecuted_blocks=1 00:47:45.331 00:47:45.331 ' 00:47:45.331 22:46:40 nvmf_dif -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:47:45.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:45.331 --rc genhtml_branch_coverage=1 00:47:45.331 --rc genhtml_function_coverage=1 00:47:45.331 --rc genhtml_legend=1 00:47:45.331 --rc geninfo_all_blocks=1 00:47:45.331 --rc geninfo_unexecuted_blocks=1 00:47:45.331 00:47:45.331 ' 00:47:45.331 22:46:40 nvmf_dif -- common/autotest_common.sh@1695 -- # 
export 'LCOV=lcov 00:47:45.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:45.331 --rc genhtml_branch_coverage=1 00:47:45.331 --rc genhtml_function_coverage=1 00:47:45.331 --rc genhtml_legend=1 00:47:45.331 --rc geninfo_all_blocks=1 00:47:45.331 --rc geninfo_unexecuted_blocks=1 00:47:45.331 00:47:45.331 ' 00:47:45.331 22:46:40 nvmf_dif -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:47:45.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:45.331 --rc genhtml_branch_coverage=1 00:47:45.331 --rc genhtml_function_coverage=1 00:47:45.331 --rc genhtml_legend=1 00:47:45.331 --rc geninfo_all_blocks=1 00:47:45.331 --rc geninfo_unexecuted_blocks=1 00:47:45.331 00:47:45.331 ' 00:47:45.331 22:46:40 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:47:45.331 22:46:40 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:47:45.331 22:46:40 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:45.331 22:46:40 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:45.331 22:46:40 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:45.331 22:46:40 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:45.331 22:46:40 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:45.331 22:46:40 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:45.331 22:46:40 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:45.331 22:46:40 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:45.331 22:46:40 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:45.331 22:46:40 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:45.331 22:46:40 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:47:45.331 22:46:40 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:47:45.331 22:46:40 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:45.331 22:46:40 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:45.331 22:46:40 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:47:45.331 22:46:40 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:45.331 22:46:40 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:47:45.331 22:46:40 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:47:45.332 22:46:40 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:45.332 22:46:40 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:45.332 22:46:40 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:45.332 22:46:40 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:45.332 22:46:40 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:45.332 22:46:40 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:45.332 22:46:40 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:47:45.332 22:46:40 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:45.332 22:46:40 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:47:45.332 22:46:40 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:47:45.332 22:46:40 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:47:45.332 22:46:40 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:45.332 22:46:40 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:45.332 22:46:40 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:45.332 22:46:40 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:47:45.332 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:47:45.332 22:46:40 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:47:45.332 22:46:40 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:47:45.332 22:46:40 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:47:45.332 22:46:40 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:47:45.332 22:46:40 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:47:45.332 22:46:40 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:47:45.332 22:46:40 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:47:45.332 22:46:40 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:47:45.332 22:46:40 nvmf_dif -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:47:45.332 22:46:40 nvmf_dif -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:47:45.332 22:46:40 nvmf_dif -- nvmf/common.sh@474 -- # prepare_net_devs 00:47:45.332 22:46:40 nvmf_dif -- nvmf/common.sh@436 -- # local -g is_hw=no 00:47:45.332 22:46:40 nvmf_dif -- nvmf/common.sh@438 -- # remove_spdk_ns 00:47:45.332 22:46:40 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:45.332 22:46:40 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:47:45.332 22:46:40 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:45.332 22:46:40 nvmf_dif -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:47:45.332 22:46:40 nvmf_dif -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:47:45.332 22:46:40 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:47:45.332 22:46:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:47:53.474 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:47:53.474 
22:46:47 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:47:53.474 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:47:53.474 Found net devices under 0000:4b:00.0: cvl_0_0 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:47:53.474 Found net devices under 0000:4b:00.1: cvl_0_1 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@440 -- # is_hw=yes 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:47:53.474 22:46:47 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:47:53.475 22:46:47 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:47:53.475 22:46:47 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:47:53.475 22:46:47 nvmf_dif -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:47:53.475 22:46:47 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:47:53.475 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:47:53.475 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:47:53.475 00:47:53.475 --- 10.0.0.2 ping statistics --- 00:47:53.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:53.475 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:47:53.475 22:46:47 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:47:53.475 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:47:53.475 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:47:53.475 00:47:53.475 --- 10.0.0.1 ping statistics --- 00:47:53.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:53.475 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:47:53.475 22:46:47 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:47:53.475 22:46:47 nvmf_dif -- nvmf/common.sh@448 -- # return 0 00:47:53.475 22:46:47 nvmf_dif -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:47:53.475 22:46:47 nvmf_dif -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:47:56.019 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:47:56.019 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:47:56.019 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:47:56.019 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:47:56.019 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:47:56.019 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:47:56.019 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:47:56.019 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:47:56.019 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:47:56.019 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:47:56.019 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:47:56.019 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:47:56.019 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:47:56.019 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:47:56.019 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:47:56.019 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:47:56.019 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:47:56.280 22:46:51 nvmf_dif -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:47:56.280 22:46:51 nvmf_dif -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:47:56.280 22:46:51 nvmf_dif -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:47:56.280 22:46:51 nvmf_dif -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:47:56.280 22:46:51 nvmf_dif -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:47:56.280 22:46:51 nvmf_dif -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:47:56.280 22:46:51 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:47:56.280 22:46:51 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:47:56.280 22:46:51 nvmf_dif -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:47:56.280 22:46:51 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:47:56.280 22:46:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:47:56.280 22:46:51 nvmf_dif -- nvmf/common.sh@507 -- # nvmfpid=511052 00:47:56.280 22:46:51 nvmf_dif -- nvmf/common.sh@508 -- # waitforlisten 511052 00:47:56.280 22:46:51 nvmf_dif -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:47:56.280 22:46:51 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 511052 ']' 00:47:56.280 22:46:51 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:56.280 22:46:51 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:47:56.280 22:46:51 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:47:56.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:56.280 22:46:51 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:47:56.280 22:46:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:47:56.280 [2024-10-01 22:46:51.454047] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:47:56.280 [2024-10-01 22:46:51.454101] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:56.280 [2024-10-01 22:46:51.523474] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:56.540 [2024-10-01 22:46:51.593971] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:47:56.540 [2024-10-01 22:46:51.594013] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:47:56.540 [2024-10-01 22:46:51.594021] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:47:56.540 [2024-10-01 22:46:51.594027] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:47:56.540 [2024-10-01 22:46:51.594033] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:47:56.540 [2024-10-01 22:46:51.594052] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:47:57.109 22:46:52 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:47:57.109 22:46:52 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:47:57.109 22:46:52 nvmf_dif -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:47:57.109 22:46:52 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:47:57.109 22:46:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:47:57.109 22:46:52 nvmf_dif -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:47:57.109 22:46:52 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:47:57.109 22:46:52 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:47:57.109 22:46:52 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:57.109 22:46:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:47:57.109 [2024-10-01 22:46:52.287978] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:57.109 22:46:52 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:57.109 22:46:52 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:47:57.109 22:46:52 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:47:57.109 22:46:52 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:47:57.109 22:46:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:47:57.109 ************************************ 00:47:57.109 START TEST fio_dif_1_default 00:47:57.109 ************************************ 00:47:57.109 22:46:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:47:57.109 22:46:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:47:57.109 22:46:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:47:57.109 22:46:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:47:57.109 22:46:52 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:47:57.109 22:46:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:47:57.109 22:46:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:47:57.109 22:46:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:57.109 22:46:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:47:57.109 bdev_null0 00:47:57.109 22:46:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:57.109 22:46:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:47:57.109 22:46:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:57.109 22:46:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:47:57.109 22:46:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:57.110 22:46:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:47:57.110 22:46:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:57.110 22:46:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:47:57.370 22:46:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:57.370 22:46:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:47:57.370 22:46:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:57.370 22:46:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:47:57.370 [2024-10-01 22:46:52.372328] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:47:57.370 22:46:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:57.370 22:46:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:47:57.370 22:46:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:47:57.370 22:46:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:47:57.370 22:46:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # config=() 00:47:57.370 22:46:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:47:57.370 22:46:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # local subsystem config 00:47:57.370 22:46:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:47:57.370 22:46:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:47:57.370 22:46:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:47:57.370 { 00:47:57.370 "params": { 00:47:57.370 "name": "Nvme$subsystem", 00:47:57.370 "trtype": "$TEST_TRANSPORT", 00:47:57.370 "traddr": "$NVMF_FIRST_TARGET_IP", 00:47:57.370 "adrfam": "ipv4", 00:47:57.370 "trsvcid": "$NVMF_PORT", 00:47:57.370 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:47:57.370 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:47:57.370 "hdgst": ${hdgst:-false}, 00:47:57.370 
"ddgst": ${ddgst:-false} 00:47:57.370 }, 00:47:57.370 "method": "bdev_nvme_attach_controller" 00:47:57.370 } 00:47:57.370 EOF 00:47:57.370 )") 00:47:57.370 22:46:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:47:57.370 22:46:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:47:57.370 22:46:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:47:57.370 22:46:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:47:57.370 22:46:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:47:57.370 22:46:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:47:57.370 22:46:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:47:57.370 22:46:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:47:57.370 22:46:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:47:57.370 22:46:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:47:57.370 22:46:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # cat 00:47:57.370 22:46:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:47:57.370 22:46:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:47:57.370 22:46:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:47:57.370 22:46:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:47:57.370 22:46:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:47:57.370 22:46:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # jq . 
00:47:57.370 22:46:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@583 -- # IFS=, 00:47:57.370 22:46:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:47:57.370 "params": { 00:47:57.370 "name": "Nvme0", 00:47:57.370 "trtype": "tcp", 00:47:57.370 "traddr": "10.0.0.2", 00:47:57.370 "adrfam": "ipv4", 00:47:57.370 "trsvcid": "4420", 00:47:57.370 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:57.370 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:47:57.370 "hdgst": false, 00:47:57.370 "ddgst": false 00:47:57.370 }, 00:47:57.370 "method": "bdev_nvme_attach_controller" 00:47:57.370 }' 00:47:57.370 22:46:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:47:57.370 22:46:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:47:57.370 22:46:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:47:57.370 22:46:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:47:57.370 22:46:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:47:57.370 22:46:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:47:57.370 22:46:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:47:57.370 22:46:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:47:57.370 22:46:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:47:57.370 22:46:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:47:57.631 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:47:57.631 fio-3.35 00:47:57.631 Starting 1 thread 00:48:09.856 00:48:09.856 filename0: (groupid=0, jobs=1): err= 0: pid=511624: Tue Oct 1 22:47:03 2024 00:48:09.856 read: IOPS=191, BW=765KiB/s (784kB/s)(7664KiB/10012msec) 00:48:09.856 slat (nsec): min=5523, max=32885, avg=6642.48, stdev=1535.65 00:48:09.856 clat (usec): min=769, max=44615, avg=20883.28, stdev=20097.79 00:48:09.856 lat (usec): min=775, max=44648, avg=20889.92, stdev=20097.79 00:48:09.856 clat percentiles (usec): 00:48:09.856 | 1.00th=[ 832], 5.00th=[ 865], 10.00th=[ 889], 20.00th=[ 914], 00:48:09.856 | 30.00th=[ 930], 40.00th=[ 947], 50.00th=[ 1029], 60.00th=[41157], 00:48:09.856 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:48:09.856 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:48:09.856 | 99.99th=[44827] 00:48:09.856 bw ( KiB/s): min= 704, max= 896, per=99.81%, avg=764.80, stdev=38.71, samples=20 00:48:09.856 iops : min= 176, max= 224, avg=191.20, stdev= 9.68, samples=20 00:48:09.856 lat (usec) : 1000=49.27% 00:48:09.856 lat (msec) : 2=1.04%, 50=49.69% 00:48:09.856 cpu : usr=93.75%, sys=6.03%, ctx=9, majf=0, minf=228 00:48:09.856 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:09.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:09.856 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:09.856 issued rwts: total=1916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:09.856 latency : target=0, window=0, percentile=100.00%, depth=4 00:48:09.856 
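A quick consistency check on the job block above: 1916 issued reads x 4KiB = 7664KiB, matching io=7664KiB, and 7664KiB over the 10.012s runtime is ~765KiB/s, i.e. ~191 IOPS, matching the read line. The completion latency is plainly bimodal (roughly half the reads near 1ms, the rest at the ~41ms mode shown in the percentiles), which is what stretches the average out to ~20.9ms.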
00:48:09.856 Run status group 0 (all jobs): 00:48:09.856 READ: bw=765KiB/s (784kB/s), 765KiB/s-765KiB/s (784kB/s-784kB/s), io=7664KiB (7848kB), run=10012-10012msec 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:09.856 00:48:09.856 real 0m11.327s 00:48:09.856 user 0m28.023s 00:48:09.856 sys 0m0.953s 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:48:09.856 ************************************ 00:48:09.856 END TEST fio_dif_1_default 00:48:09.856 ************************************ 00:48:09.856 22:47:03 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:48:09.856 22:47:03 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:48:09.856 22:47:03 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:48:09.856 22:47:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:48:09.856 ************************************ 00:48:09.856 START TEST fio_dif_1_multi_subsystems 00:48:09.856 ************************************ 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:09.856 bdev_null0 00:48:09.856 22:47:03 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:09.856 [2024-10-01 22:47:03.754653] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:09.856 bdev_null1 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # config=() 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # local subsystem config 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:48:09.856 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:48:09.856 { 00:48:09.856 "params": { 00:48:09.856 "name": "Nvme$subsystem", 00:48:09.856 "trtype": "$TEST_TRANSPORT", 00:48:09.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:09.856 "adrfam": "ipv4", 00:48:09.856 "trsvcid": "$NVMF_PORT", 00:48:09.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:09.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:09.856 "hdgst": ${hdgst:-false}, 00:48:09.856 "ddgst": ${ddgst:-false} 00:48:09.856 }, 00:48:09.856 "method": "bdev_nvme_attach_controller" 00:48:09.856 } 00:48:09.856 EOF 00:48:09.856 )") 00:48:09.857 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:48:09.857 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:48:09.857 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:48:09.857 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:48:09.857 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:48:09.857 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:48:09.857 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:48:09.857 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:48:09.857 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:48:09.857 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:48:09.857 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:48:09.857 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:48:09.857 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:48:09.857 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:48:09.857 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:48:09.857 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:48:09.857 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:48:09.857 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:48:09.857 { 00:48:09.857 "params": { 00:48:09.857 "name": "Nvme$subsystem", 00:48:09.857 "trtype": "$TEST_TRANSPORT", 00:48:09.857 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:09.857 "adrfam": "ipv4", 00:48:09.857 "trsvcid": "$NVMF_PORT", 00:48:09.857 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:09.857 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:09.857 "hdgst": ${hdgst:-false}, 00:48:09.857 "ddgst": ${ddgst:-false} 00:48:09.857 }, 00:48:09.857 "method": "bdev_nvme_attach_controller" 00:48:09.857 } 00:48:09.857 EOF 00:48:09.857 )") 00:48:09.857 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:48:09.857 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:48:09.857 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:48:09.857 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # jq . 00:48:09.857 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@583 -- # IFS=, 00:48:09.857 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:48:09.857 "params": { 00:48:09.857 "name": "Nvme0", 00:48:09.857 "trtype": "tcp", 00:48:09.857 "traddr": "10.0.0.2", 00:48:09.857 "adrfam": "ipv4", 00:48:09.857 "trsvcid": "4420", 00:48:09.857 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:48:09.857 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:48:09.857 "hdgst": false, 00:48:09.857 "ddgst": false 00:48:09.857 }, 00:48:09.857 "method": "bdev_nvme_attach_controller" 00:48:09.857 },{ 00:48:09.857 "params": { 00:48:09.857 "name": "Nvme1", 00:48:09.857 "trtype": "tcp", 00:48:09.857 "traddr": "10.0.0.2", 00:48:09.857 "adrfam": "ipv4", 00:48:09.857 "trsvcid": "4420", 00:48:09.857 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:48:09.857 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:48:09.857 "hdgst": false, 00:48:09.857 "ddgst": false 00:48:09.857 }, 00:48:09.857 "method": "bdev_nvme_attach_controller" 00:48:09.857 }' 00:48:09.857 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:48:09.857 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:48:09.857 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:48:09.857 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:48:09.857 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:48:09.857 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:48:09.857 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:48:09.857 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:48:09.857 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:48:09.857 22:47:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:09.857 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:48:09.857 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:48:09.857 fio-3.35 00:48:09.857 Starting 2 threads 00:48:19.854 00:48:19.854 filename0: (groupid=0, jobs=1): err= 0: pid=513993: Tue Oct 1 22:47:14 2024 00:48:19.854 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10010msec) 00:48:19.854 slat (nsec): min=5516, max=29807, avg=6418.92, stdev=1387.35 00:48:19.854 clat (usec): min=40897, max=43965, avg=41003.11, stdev=210.41 00:48:19.854 lat (usec): min=40903, max=43995, avg=41009.53, stdev=210.96 00:48:19.854 clat percentiles (usec): 00:48:19.854 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:48:19.854 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:48:19.854 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:48:19.854 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:48:19.854 | 99.99th=[43779] 00:48:19.854 bw ( KiB/s): min= 384, max= 416, per=49.64%, avg=388.80, stdev=11.72, samples=20 00:48:19.854 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:48:19.854 lat (msec) : 50=100.00% 00:48:19.854 cpu : usr=95.60%, sys=4.18%, ctx=12, majf=0, minf=91 00:48:19.854 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:19.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:19.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:19.854 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:19.854 latency : target=0, window=0, percentile=100.00%, depth=4 00:48:19.854 filename1: (groupid=0, jobs=1): err= 0: pid=513994: Tue Oct 1 22:47:14 2024 00:48:19.854 read: IOPS=97, BW=392KiB/s (401kB/s)(3920KiB/10006msec) 00:48:19.854 slat (nsec): min=5521, max=52958, avg=6497.47, stdev=2089.79 00:48:19.854 clat (usec): min=558, max=41806, avg=40822.29, stdev=2579.28 00:48:19.854 lat (usec): min=563, max=41837, avg=40828.79, stdev=2579.33 00:48:19.854 clat percentiles (usec): 00:48:19.854 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:48:19.854 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:48:19.854 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:48:19.854 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:48:19.854 | 99.99th=[41681] 00:48:19.854 bw ( KiB/s): min= 384, max= 416, per=49.90%, avg=390.40, stdev=13.13, samples=20 00:48:19.854 iops : min= 96, max= 104, avg=97.60, stdev= 3.28, samples=20 00:48:19.854 lat (usec) : 750=0.41% 00:48:19.854 lat (msec) : 50=99.59% 00:48:19.854 cpu : usr=95.61%, sys=4.16%, ctx=32, majf=0, minf=205 00:48:19.854 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:19.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:48:19.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:19.854 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:19.854 latency : target=0, window=0, percentile=100.00%, depth=4 00:48:19.854 00:48:19.854 Run status group 0 (all jobs): 00:48:19.854 READ: bw=782KiB/s (800kB/s), 390KiB/s-392KiB/s (399kB/s-401kB/s), io=7824KiB (8012kB), run=10006-10010msec 00:48:19.854 22:47:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:48:19.854 22:47:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:48:19.854 22:47:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:48:19.854 22:47:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:48:19.854 22:47:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:48:19.854 22:47:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:48:19.854 22:47:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:19.854 22:47:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:20.115 22:47:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:20.115 22:47:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:48:20.115 22:47:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:20.115 22:47:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:20.115 22:47:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:20.115 22:47:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:48:20.115 22:47:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:48:20.115 22:47:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:48:20.115 22:47:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:48:20.115 22:47:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:20.115 22:47:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:20.115 22:47:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:20.115 22:47:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:48:20.115 22:47:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:20.115 22:47:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:20.115 22:47:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:20.115 00:48:20.115 real 0m11.441s 00:48:20.115 user 0m37.823s 00:48:20.115 sys 0m1.255s 00:48:20.115 22:47:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:48:20.115 22:47:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:20.115 ************************************ 00:48:20.115 END TEST fio_dif_1_multi_subsystems 00:48:20.115 ************************************ 00:48:20.115 22:47:15 nvmf_dif -- target/dif.sh@143 -- # run_test 
fio_dif_rand_params fio_dif_rand_params 00:48:20.115 22:47:15 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:48:20.115 22:47:15 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:48:20.115 22:47:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:48:20.115 ************************************ 00:48:20.115 START TEST fio_dif_rand_params 00:48:20.115 ************************************ 00:48:20.115 22:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:48:20.115 22:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:48:20.115 22:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:48:20.115 22:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:48:20.115 22:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:48:20.115 22:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:48:20.115 22:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:48:20.115 22:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:48:20.115 22:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:48:20.115 22:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:48:20.115 22:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:48:20.115 22:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:48:20.115 22:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:48:20.115 22:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:48:20.115 22:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:20.115 22:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:20.115 bdev_null0 00:48:20.115 22:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:20.115 22:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:48:20.115 22:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:20.115 22:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:20.115 22:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:20.115 22:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:48:20.115 22:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:20.115 22:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:20.115 22:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:20.115 22:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:48:20.115 22:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:20.115 22:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:20.115 [2024-10-01 22:47:15.275893] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:48:20.115 
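The trace above repeats the target-side setup pattern used throughout this suite: rpc_cmd (a wrapper around SPDK's scripts/rpc.py) creates a null bdev with a 16-byte metadata region and the requested DIF type, wraps it in an NVMe-oF subsystem, attaches the bdev as a namespace, and opens a TCP listener. As a minimal sketch, the same setup run by hand against an already-running nvmf_tgt would look roughly like the following; the transport-creation step is an assumption (it happens earlier in the suite, outside this trace), and the argument values are the ones shown in the trace:

    # assumed prerequisite, not shown in this part of the trace:
    scripts/rpc.py nvmf_create_transport -t tcp
    # 64 MiB null bdev, 512-byte blocks, 16 bytes of metadata, DIF type 3
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    # subsystem with the serial-number convention used by create_subsystem
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420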
22:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:20.116 22:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:48:20.116 22:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:48:20.116 22:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:48:20.116 22:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:20.116 22:47:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:48:20.116 22:47:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:48:20.116 22:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:20.116 22:47:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:48:20.116 22:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:48:20.116 22:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:48:20.116 22:47:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:48:20.116 { 00:48:20.116 "params": { 00:48:20.116 "name": "Nvme$subsystem", 00:48:20.116 "trtype": "$TEST_TRANSPORT", 00:48:20.116 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:20.116 "adrfam": "ipv4", 00:48:20.116 "trsvcid": "$NVMF_PORT", 00:48:20.116 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:20.116 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:20.116 "hdgst": ${hdgst:-false}, 00:48:20.116 "ddgst": ${ddgst:-false} 00:48:20.116 }, 00:48:20.116 "method": "bdev_nvme_attach_controller" 00:48:20.116 } 00:48:20.116 EOF 00:48:20.116 )") 00:48:20.116 22:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:48:20.116 22:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:48:20.116 22:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:48:20.116 22:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:48:20.116 22:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:48:20.116 22:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:48:20.116 22:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:48:20.116 22:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:48:20.116 22:47:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:48:20.116 22:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:48:20.116 22:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:48:20.116 22:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:48:20.116 22:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:48:20.116 22:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:48:20.116 22:47:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 
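gen_nvmf_target_json above builds one bdev_nvme_attach_controller entry per subsystem from the heredoc template, joins them with IFS=',' and pretty-prints the result through jq; fio_bdev then LD_PRELOADs the spdk_bdev plugin and feeds fio the JSON config and the generated job file via /dev/fd. Using the parameter values printed just below, the single-subsystem config resolves to roughly the following; the outer "subsystems"/"config" wrapper is the conventional SPDK JSON config shape and is assumed here, since the trace shows only the joined entries:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }

Saved to a file, the equivalent standalone invocation would be along the lines of the command below; the plugin path is the one from the LD_PRELOAD line in the trace, while the config and job file names are illustrative stand-ins for the /dev/fd substitutions:

    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio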
00:48:20.116 22:47:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:48:20.116 22:47:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:48:20.116 "params": { 00:48:20.116 "name": "Nvme0", 00:48:20.116 "trtype": "tcp", 00:48:20.116 "traddr": "10.0.0.2", 00:48:20.116 "adrfam": "ipv4", 00:48:20.116 "trsvcid": "4420", 00:48:20.116 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:48:20.116 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:48:20.116 "hdgst": false, 00:48:20.116 "ddgst": false 00:48:20.116 }, 00:48:20.116 "method": "bdev_nvme_attach_controller" 00:48:20.116 }' 00:48:20.116 22:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:48:20.116 22:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:48:20.116 22:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:48:20.116 22:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:48:20.116 22:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:48:20.116 22:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:48:20.116 22:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:48:20.116 22:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:48:20.116 22:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:48:20.116 22:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:20.709 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:48:20.709 ... 
00:48:20.709 fio-3.35 00:48:20.709 Starting 3 threads 00:48:27.286 00:48:27.286 filename0: (groupid=0, jobs=1): err= 0: pid=516205: Tue Oct 1 22:47:21 2024 00:48:27.286 read: IOPS=253, BW=31.7MiB/s (33.3MB/s)(159MiB/5008msec) 00:48:27.286 slat (nsec): min=5631, max=32557, avg=8402.89, stdev=2374.33 00:48:27.286 clat (usec): min=5234, max=54566, avg=11807.92, stdev=5229.57 00:48:27.287 lat (usec): min=5240, max=54572, avg=11816.32, stdev=5229.65 00:48:27.287 clat percentiles (usec): 00:48:27.287 | 1.00th=[ 5932], 5.00th=[ 7242], 10.00th=[ 7898], 20.00th=[ 8848], 00:48:27.287 | 30.00th=[ 9372], 40.00th=[10290], 50.00th=[11338], 60.00th=[12518], 00:48:27.287 | 70.00th=[13304], 80.00th=[13829], 90.00th=[14746], 95.00th=[15270], 00:48:27.287 | 99.00th=[47973], 99.50th=[49021], 99.90th=[53740], 99.95th=[54789], 00:48:27.287 | 99.99th=[54789] 00:48:27.287 bw ( KiB/s): min=29696, max=34304, per=35.51%, avg=32460.80, stdev=1515.96, samples=10 00:48:27.287 iops : min= 232, max= 268, avg=253.60, stdev=11.84, samples=10 00:48:27.287 lat (msec) : 10=38.32%, 20=60.27%, 50=0.94%, 100=0.47% 00:48:27.287 cpu : usr=94.99%, sys=4.77%, ctx=7, majf=0, minf=101 00:48:27.287 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:27.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:27.287 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:27.287 issued rwts: total=1271,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:27.287 latency : target=0, window=0, percentile=100.00%, depth=3 00:48:27.287 filename0: (groupid=0, jobs=1): err= 0: pid=516206: Tue Oct 1 22:47:21 2024 00:48:27.287 read: IOPS=238, BW=29.8MiB/s (31.3MB/s)(151MiB/5046msec) 00:48:27.287 slat (nsec): min=5605, max=32541, avg=8466.17, stdev=2516.22 00:48:27.287 clat (usec): min=5335, max=90869, avg=12514.65, stdev=6514.86 00:48:27.287 lat (usec): min=5341, max=90878, avg=12523.11, stdev=6515.01 00:48:27.287 clat percentiles (usec): 00:48:27.287 | 1.00th=[ 5997], 5.00th=[ 7832], 10.00th=[ 8586], 20.00th=[ 9372], 00:48:27.287 | 30.00th=[10159], 40.00th=[10814], 50.00th=[11731], 60.00th=[12780], 00:48:27.287 | 70.00th=[13435], 80.00th=[13960], 90.00th=[14615], 95.00th=[15401], 00:48:27.287 | 99.00th=[49546], 99.50th=[51119], 99.90th=[53216], 99.95th=[90702], 00:48:27.287 | 99.99th=[90702] 00:48:27.287 bw ( KiB/s): min=18944, max=35584, per=33.69%, avg=30796.80, stdev=4554.03, samples=10 00:48:27.287 iops : min= 148, max= 278, avg=240.60, stdev=35.58, samples=10 00:48:27.287 lat (msec) : 10=27.88%, 20=69.71%, 50=1.66%, 100=0.75% 00:48:27.287 cpu : usr=95.10%, sys=4.64%, ctx=11, majf=0, minf=144 00:48:27.287 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:27.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:27.287 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:27.287 issued rwts: total=1205,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:27.287 latency : target=0, window=0, percentile=100.00%, depth=3 00:48:27.287 filename0: (groupid=0, jobs=1): err= 0: pid=516207: Tue Oct 1 22:47:21 2024 00:48:27.287 read: IOPS=223, BW=27.9MiB/s (29.3MB/s)(141MiB/5045msec) 00:48:27.287 slat (nsec): min=5611, max=46734, avg=8559.65, stdev=2373.41 00:48:27.287 clat (usec): min=4638, max=92826, avg=13366.50, stdev=12948.43 00:48:27.287 lat (usec): min=4647, max=92833, avg=13375.06, stdev=12948.31 00:48:27.287 clat percentiles (usec): 00:48:27.287 | 1.00th=[ 4948], 5.00th=[ 7177], 10.00th=[ 7898], 20.00th=[ 
8455], 00:48:27.287 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9765], 00:48:27.287 | 70.00th=[10028], 80.00th=[10683], 90.00th=[13698], 95.00th=[50594], 00:48:27.287 | 99.00th=[52167], 99.50th=[53740], 99.90th=[92799], 99.95th=[92799], 00:48:27.287 | 99.99th=[92799] 00:48:27.287 bw ( KiB/s): min=19712, max=37120, per=31.53%, avg=28825.60, stdev=5371.53, samples=10 00:48:27.287 iops : min= 154, max= 290, avg=225.20, stdev=41.97, samples=10 00:48:27.287 lat (msec) : 10=69.15%, 20=21.19%, 50=2.84%, 100=6.83% 00:48:27.287 cpu : usr=95.90%, sys=3.87%, ctx=9, majf=0, minf=130 00:48:27.287 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:27.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:27.287 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:27.287 issued rwts: total=1128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:27.287 latency : target=0, window=0, percentile=100.00%, depth=3 00:48:27.287 00:48:27.287 Run status group 0 (all jobs): 00:48:27.287 READ: bw=89.3MiB/s (93.6MB/s), 27.9MiB/s-31.7MiB/s (29.3MB/s-33.3MB/s), io=451MiB (472MB), run=5008-5046msec 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:27.287 bdev_null0 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:27.287 [2024-10-01 22:47:21.635524] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:27.287 bdev_null1 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:27.287 bdev_null2 00:48:27.287 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:27.288 22:47:21 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:48:27.288 { 00:48:27.288 "params": { 00:48:27.288 "name": "Nvme$subsystem", 00:48:27.288 "trtype": "$TEST_TRANSPORT", 00:48:27.288 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:27.288 "adrfam": "ipv4", 00:48:27.288 "trsvcid": "$NVMF_PORT", 00:48:27.288 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:27.288 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:27.288 "hdgst": ${hdgst:-false}, 00:48:27.288 "ddgst": ${ddgst:-false} 00:48:27.288 }, 00:48:27.288 "method": "bdev_nvme_attach_controller" 00:48:27.288 } 00:48:27.288 EOF 00:48:27.288 )") 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:48:27.288 { 00:48:27.288 "params": { 00:48:27.288 "name": "Nvme$subsystem", 00:48:27.288 "trtype": "$TEST_TRANSPORT", 00:48:27.288 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:27.288 "adrfam": "ipv4", 00:48:27.288 "trsvcid": "$NVMF_PORT", 00:48:27.288 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:27.288 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:27.288 "hdgst": ${hdgst:-false}, 00:48:27.288 "ddgst": ${ddgst:-false} 00:48:27.288 }, 00:48:27.288 "method": "bdev_nvme_attach_controller" 00:48:27.288 } 00:48:27.288 EOF 00:48:27.288 )") 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:48:27.288 22:47:21 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:48:27.288 { 00:48:27.288 "params": { 00:48:27.288 "name": "Nvme$subsystem", 00:48:27.288 "trtype": "$TEST_TRANSPORT", 00:48:27.288 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:27.288 "adrfam": "ipv4", 00:48:27.288 "trsvcid": "$NVMF_PORT", 00:48:27.288 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:27.288 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:27.288 "hdgst": ${hdgst:-false}, 00:48:27.288 "ddgst": ${ddgst:-false} 00:48:27.288 }, 00:48:27.288 "method": "bdev_nvme_attach_controller" 00:48:27.288 } 00:48:27.288 EOF 00:48:27.288 )") 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:48:27.288 "params": { 00:48:27.288 "name": "Nvme0", 00:48:27.288 "trtype": "tcp", 00:48:27.288 "traddr": "10.0.0.2", 00:48:27.288 "adrfam": "ipv4", 00:48:27.288 "trsvcid": "4420", 00:48:27.288 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:48:27.288 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:48:27.288 "hdgst": false, 00:48:27.288 "ddgst": false 00:48:27.288 }, 00:48:27.288 "method": "bdev_nvme_attach_controller" 00:48:27.288 },{ 00:48:27.288 "params": { 00:48:27.288 "name": "Nvme1", 00:48:27.288 "trtype": "tcp", 00:48:27.288 "traddr": "10.0.0.2", 00:48:27.288 "adrfam": "ipv4", 00:48:27.288 "trsvcid": "4420", 00:48:27.288 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:48:27.288 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:48:27.288 "hdgst": false, 00:48:27.288 "ddgst": false 00:48:27.288 }, 00:48:27.288 "method": "bdev_nvme_attach_controller" 00:48:27.288 },{ 00:48:27.288 "params": { 00:48:27.288 "name": "Nvme2", 00:48:27.288 "trtype": "tcp", 00:48:27.288 "traddr": "10.0.0.2", 00:48:27.288 "adrfam": "ipv4", 00:48:27.288 "trsvcid": "4420", 00:48:27.288 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:48:27.288 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:48:27.288 "hdgst": false, 00:48:27.288 "ddgst": false 00:48:27.288 }, 00:48:27.288 "method": "bdev_nvme_attach_controller" 00:48:27.288 }' 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:48:27.288 
22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:48:27.288 22:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:27.288 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:48:27.288 ... 00:48:27.288 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:48:27.288 ... 00:48:27.288 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:48:27.288 ... 00:48:27.288 fio-3.35 00:48:27.288 Starting 24 threads 00:48:39.515 00:48:39.515 filename0: (groupid=0, jobs=1): err= 0: pid=517718: Tue Oct 1 22:47:33 2024 00:48:39.515 read: IOPS=487, BW=1950KiB/s (1996kB/s)(19.1MiB/10012msec) 00:48:39.515 slat (usec): min=4, max=125, avg=20.60, stdev=19.63 00:48:39.515 clat (usec): min=18643, max=56554, avg=32667.05, stdev=2190.64 00:48:39.515 lat (usec): min=18651, max=56569, avg=32687.65, stdev=2189.99 00:48:39.515 clat percentiles (usec): 00:48:39.515 | 1.00th=[24249], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:48:39.515 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:48:39.515 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817], 00:48:39.515 | 99.00th=[41157], 99.50th=[41681], 99.90th=[56361], 99.95th=[56361], 00:48:39.515 | 99.99th=[56361] 00:48:39.515 bw ( KiB/s): min= 1795, max= 2048, per=4.11%, avg=1947.11, stdev=66.67, samples=19 00:48:39.515 iops : min= 448, max= 512, avg=486.74, stdev=16.76, samples=19 00:48:39.515 lat (msec) : 20=0.14%, 50=99.53%, 100=0.33% 00:48:39.515 cpu : usr=98.65%, sys=0.93%, ctx=47, majf=0, minf=25 00:48:39.515 IO depths : 1=4.4%, 2=10.5%, 4=24.4%, 8=52.6%, 16=8.1%, 32=0.0%, >=64=0.0% 00:48:39.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:39.515 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:39.515 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:39.515 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:39.515 filename0: (groupid=0, jobs=1): err= 0: pid=517719: Tue Oct 1 22:47:33 2024 00:48:39.515 read: IOPS=488, BW=1954KiB/s (2001kB/s)(19.1MiB/10020msec) 00:48:39.515 slat (nsec): min=4845, max=87020, avg=27938.28, stdev=14119.02 00:48:39.515 clat (usec): min=19741, max=35693, avg=32510.79, stdev=929.81 00:48:39.515 lat (usec): min=19751, max=35713, avg=32538.73, stdev=929.63 00:48:39.515 clat percentiles (usec): 00:48:39.515 | 1.00th=[31327], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:48:39.515 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:48:39.515 | 70.00th=[32900], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817], 00:48:39.515 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:48:39.515 | 99.99th=[35914] 00:48:39.515 bw ( KiB/s): min= 1792, max= 2048, per=4.12%, avg=1953.37, stdev=71.61, samples=19 00:48:39.515 iops : min= 448, max= 512, avg=488.26, stdev=17.87, samples=19 00:48:39.515 lat (msec) : 20=0.33%, 50=99.67% 00:48:39.515 cpu : usr=98.70%, sys=1.02%, ctx=17, majf=0, minf=38 00:48:39.515 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 
32=0.0%, >=64=0.0% 00:48:39.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:39.515 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:39.515 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:39.515 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:39.515 filename0: (groupid=0, jobs=1): err= 0: pid=517720: Tue Oct 1 22:47:33 2024 00:48:39.515 read: IOPS=487, BW=1951KiB/s (1998kB/s)(19.1MiB/10015msec) 00:48:39.515 slat (usec): min=5, max=118, avg=35.11, stdev=20.03 00:48:39.515 clat (usec): min=15083, max=57440, avg=32457.86, stdev=1508.60 00:48:39.515 lat (usec): min=15095, max=57457, avg=32492.97, stdev=1508.13 00:48:39.515 clat percentiles (usec): 00:48:39.515 | 1.00th=[31327], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:48:39.515 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:48:39.515 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817], 00:48:39.515 | 99.00th=[34341], 99.50th=[34866], 99.90th=[47449], 99.95th=[47449], 00:48:39.515 | 99.99th=[57410] 00:48:39.515 bw ( KiB/s): min= 1840, max= 2048, per=4.11%, avg=1949.47, stdev=63.16, samples=19 00:48:39.515 iops : min= 460, max= 512, avg=487.37, stdev=15.79, samples=19 00:48:39.515 lat (msec) : 20=0.37%, 50=99.59%, 100=0.04% 00:48:39.515 cpu : usr=98.93%, sys=0.66%, ctx=48, majf=0, minf=35 00:48:39.515 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:48:39.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:39.515 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:39.515 issued rwts: total=4886,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:39.515 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:39.515 filename0: (groupid=0, jobs=1): err= 0: pid=517721: Tue Oct 1 22:47:33 2024 00:48:39.515 read: IOPS=506, BW=2026KiB/s (2074kB/s)(19.8MiB/10028msec) 00:48:39.515 slat (nsec): min=5672, max=82233, avg=11407.55, stdev=9331.84 00:48:39.515 clat (usec): min=7840, max=62332, avg=31517.33, stdev=4973.07 00:48:39.515 lat (usec): min=7850, max=62339, avg=31528.74, stdev=4973.71 00:48:39.515 clat percentiles (usec): 00:48:39.515 | 1.00th=[12911], 5.00th=[22152], 10.00th=[24773], 20.00th=[31065], 00:48:39.515 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:48:39.515 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[37487], 00:48:39.515 | 99.00th=[47973], 99.50th=[50070], 99.90th=[62129], 99.95th=[62129], 00:48:39.515 | 99.99th=[62129] 00:48:39.515 bw ( KiB/s): min= 1872, max= 2304, per=4.27%, avg=2025.00, stdev=124.58, samples=19 00:48:39.515 iops : min= 468, max= 576, avg=506.21, stdev=31.12, samples=19 00:48:39.515 lat (msec) : 10=0.85%, 20=1.67%, 50=97.09%, 100=0.39% 00:48:39.515 cpu : usr=98.33%, sys=1.40%, ctx=15, majf=0, minf=23 00:48:39.515 IO depths : 1=3.4%, 2=6.9%, 4=16.2%, 8=64.0%, 16=9.5%, 32=0.0%, >=64=0.0% 00:48:39.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:39.515 complete : 0=0.0%, 4=91.8%, 8=2.9%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:39.515 issued rwts: total=5078,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:39.515 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:39.515 filename0: (groupid=0, jobs=1): err= 0: pid=517722: Tue Oct 1 22:47:33 2024 00:48:39.515 read: IOPS=488, BW=1954KiB/s (2001kB/s)(19.1MiB/10020msec) 00:48:39.515 slat (nsec): min=3686, max=86925, avg=12059.63, stdev=8883.37 00:48:39.515 
clat (usec): min=23307, max=41303, avg=32636.89, stdev=1371.18 00:48:39.515 lat (usec): min=23313, max=41314, avg=32648.95, stdev=1371.27 00:48:39.515 clat percentiles (usec): 00:48:39.515 | 1.00th=[24773], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:48:39.515 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:48:39.515 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:48:39.515 | 99.00th=[34866], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:48:39.515 | 99.99th=[41157] 00:48:39.515 bw ( KiB/s): min= 1916, max= 2048, per=4.12%, avg=1953.26, stdev=56.44, samples=19 00:48:39.515 iops : min= 479, max= 512, avg=488.32, stdev=14.11, samples=19 00:48:39.515 lat (msec) : 50=100.00% 00:48:39.515 cpu : usr=98.88%, sys=0.85%, ctx=19, majf=0, minf=28 00:48:39.515 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:48:39.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:39.515 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:39.515 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:39.515 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:39.515 filename0: (groupid=0, jobs=1): err= 0: pid=517723: Tue Oct 1 22:47:33 2024 00:48:39.515 read: IOPS=500, BW=2003KiB/s (2051kB/s)(19.6MiB/10007msec) 00:48:39.515 slat (nsec): min=5682, max=82824, avg=12584.87, stdev=9953.55 00:48:39.515 clat (usec): min=10795, max=57929, avg=31845.56, stdev=4206.59 00:48:39.515 lat (usec): min=10801, max=57947, avg=31858.15, stdev=4207.62 00:48:39.515 clat percentiles (usec): 00:48:39.515 | 1.00th=[12649], 5.00th=[24249], 10.00th=[27132], 20.00th=[32113], 00:48:39.515 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:48:39.515 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[35914], 00:48:39.516 | 99.00th=[40633], 99.50th=[41681], 99.90th=[57934], 99.95th=[57934], 00:48:39.516 | 99.99th=[57934] 00:48:39.516 bw ( KiB/s): min= 1916, max= 2336, per=4.21%, avg=1994.74, stdev=102.62, samples=19 00:48:39.516 iops : min= 479, max= 584, avg=498.68, stdev=25.66, samples=19 00:48:39.516 lat (msec) : 20=2.17%, 50=97.55%, 100=0.28% 00:48:39.516 cpu : usr=99.15%, sys=0.56%, ctx=12, majf=0, minf=30 00:48:39.516 IO depths : 1=2.5%, 2=7.4%, 4=20.5%, 8=59.4%, 16=10.1%, 32=0.0%, >=64=0.0% 00:48:39.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:39.516 complete : 0=0.0%, 4=93.2%, 8=1.3%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:39.516 issued rwts: total=5012,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:39.516 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:39.516 filename0: (groupid=0, jobs=1): err= 0: pid=517724: Tue Oct 1 22:47:33 2024 00:48:39.516 read: IOPS=511, BW=2044KiB/s (2093kB/s)(20.0MiB/10010msec) 00:48:39.516 slat (nsec): min=5747, max=70915, avg=12342.46, stdev=8357.60 00:48:39.516 clat (usec): min=6207, max=35294, avg=31209.51, stdev=4625.40 00:48:39.516 lat (usec): min=6219, max=35305, avg=31221.85, stdev=4625.76 00:48:39.516 clat percentiles (usec): 00:48:39.516 | 1.00th=[ 8455], 5.00th=[18482], 10.00th=[29230], 20.00th=[32113], 00:48:39.516 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375], 00:48:39.516 | 70.00th=[32900], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:48:39.516 | 99.00th=[34866], 99.50th=[35390], 99.90th=[35390], 99.95th=[35390], 00:48:39.516 | 99.99th=[35390] 00:48:39.516 bw ( KiB/s): min= 1920, max= 2688, per=4.31%, 
avg=2040.20, stdev=192.75, samples=20 00:48:39.516 iops : min= 480, max= 672, avg=510.05, stdev=48.19, samples=20 00:48:39.516 lat (msec) : 10=1.21%, 20=7.00%, 50=91.79% 00:48:39.516 cpu : usr=98.76%, sys=0.92%, ctx=77, majf=0, minf=45 00:48:39.516 IO depths : 1=5.4%, 2=11.0%, 4=22.9%, 8=53.6%, 16=7.1%, 32=0.0%, >=64=0.0% 00:48:39.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:39.516 complete : 0=0.0%, 4=93.5%, 8=0.6%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:39.516 issued rwts: total=5116,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:39.516 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:39.516 filename0: (groupid=0, jobs=1): err= 0: pid=517725: Tue Oct 1 22:47:33 2024 00:48:39.516 read: IOPS=488, BW=1954KiB/s (2001kB/s)(19.1MiB/10020msec) 00:48:39.516 slat (usec): min=4, max=123, avg=30.71, stdev=21.31 00:48:39.516 clat (usec): min=20327, max=40561, avg=32486.37, stdev=1041.46 00:48:39.516 lat (usec): min=20333, max=40567, avg=32517.08, stdev=1039.04 00:48:39.516 clat percentiles (usec): 00:48:39.516 | 1.00th=[31065], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:48:39.516 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:48:39.516 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:48:39.516 | 99.00th=[34341], 99.50th=[34866], 99.90th=[40633], 99.95th=[40633], 00:48:39.516 | 99.99th=[40633] 00:48:39.516 bw ( KiB/s): min= 1792, max= 2048, per=4.12%, avg=1953.21, stdev=71.68, samples=19 00:48:39.516 iops : min= 448, max= 512, avg=488.26, stdev=17.87, samples=19 00:48:39.516 lat (msec) : 50=100.00% 00:48:39.516 cpu : usr=98.80%, sys=0.76%, ctx=128, majf=0, minf=25 00:48:39.516 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:48:39.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:39.516 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:39.516 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:39.516 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:39.516 filename1: (groupid=0, jobs=1): err= 0: pid=517726: Tue Oct 1 22:47:33 2024 00:48:39.516 read: IOPS=538, BW=2156KiB/s (2208kB/s)(21.1MiB/10021msec) 00:48:39.516 slat (usec): min=5, max=107, avg=10.52, stdev= 9.20 00:48:39.516 clat (usec): min=8427, max=57840, avg=29598.15, stdev=6158.16 00:48:39.516 lat (usec): min=8437, max=57871, avg=29608.68, stdev=6159.88 00:48:39.516 clat percentiles (usec): 00:48:39.516 | 1.00th=[10683], 5.00th=[16450], 10.00th=[20579], 20.00th=[25035], 00:48:39.516 | 30.00th=[28967], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:48:39.516 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817], 00:48:39.516 | 99.00th=[40109], 99.50th=[46924], 99.90th=[57934], 99.95th=[57934], 00:48:39.516 | 99.99th=[57934] 00:48:39.516 bw ( KiB/s): min= 1920, max= 2592, per=4.52%, avg=2139.95, stdev=208.64, samples=19 00:48:39.516 iops : min= 480, max= 648, avg=534.95, stdev=52.13, samples=19 00:48:39.516 lat (msec) : 10=0.37%, 20=8.54%, 50=90.76%, 100=0.33% 00:48:39.516 cpu : usr=98.87%, sys=0.83%, ctx=13, majf=0, minf=26 00:48:39.516 IO depths : 1=2.8%, 2=6.5%, 4=16.9%, 8=63.8%, 16=10.0%, 32=0.0%, >=64=0.0% 00:48:39.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:39.516 complete : 0=0.0%, 4=91.9%, 8=2.8%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:39.516 issued rwts: total=5401,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:39.516 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:48:39.516 filename1: (groupid=0, jobs=1): err= 0: pid=517727: Tue Oct 1 22:47:33 2024 00:48:39.516 read: IOPS=487, BW=1949KiB/s (1996kB/s)(19.1MiB/10013msec) 00:48:39.516 slat (usec): min=5, max=122, avg=32.02, stdev=18.34 00:48:39.516 clat (usec): min=19780, max=59785, avg=32516.53, stdev=1812.63 00:48:39.516 lat (usec): min=19802, max=59801, avg=32548.55, stdev=1811.73 00:48:39.516 clat percentiles (usec): 00:48:39.516 | 1.00th=[31327], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:48:39.516 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:48:39.516 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817], 00:48:39.516 | 99.00th=[34341], 99.50th=[34866], 99.90th=[59507], 99.95th=[60031], 00:48:39.516 | 99.99th=[60031] 00:48:39.516 bw ( KiB/s): min= 1792, max= 2048, per=4.11%, avg=1946.95, stdev=68.52, samples=19 00:48:39.516 iops : min= 448, max= 512, avg=486.74, stdev=17.13, samples=19 00:48:39.516 lat (msec) : 20=0.29%, 50=99.39%, 100=0.33% 00:48:39.516 cpu : usr=98.60%, sys=1.00%, ctx=147, majf=0, minf=24 00:48:39.516 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:48:39.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:39.516 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:39.516 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:39.516 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:39.516 filename1: (groupid=0, jobs=1): err= 0: pid=517728: Tue Oct 1 22:47:33 2024 00:48:39.516 read: IOPS=487, BW=1950KiB/s (1997kB/s)(19.1MiB/10011msec) 00:48:39.516 slat (usec): min=4, max=122, avg=32.73, stdev=23.00 00:48:39.516 clat (usec): min=23281, max=56736, avg=32539.99, stdev=2018.41 00:48:39.516 lat (usec): min=23288, max=56748, avg=32572.73, stdev=2016.15 00:48:39.516 clat percentiles (usec): 00:48:39.516 | 1.00th=[25297], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:48:39.516 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:48:39.516 | 70.00th=[32900], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:48:39.516 | 99.00th=[40633], 99.50th=[41157], 99.90th=[56886], 99.95th=[56886], 00:48:39.516 | 99.99th=[56886] 00:48:39.516 bw ( KiB/s): min= 1795, max= 2048, per=4.11%, avg=1947.11, stdev=68.14, samples=19 00:48:39.516 iops : min= 448, max= 512, avg=486.74, stdev=17.13, samples=19 00:48:39.516 lat (msec) : 50=99.67%, 100=0.33% 00:48:39.516 cpu : usr=98.59%, sys=0.89%, ctx=132, majf=0, minf=17 00:48:39.516 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:48:39.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:39.516 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:39.516 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:39.516 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:39.516 filename1: (groupid=0, jobs=1): err= 0: pid=517729: Tue Oct 1 22:47:33 2024 00:48:39.516 read: IOPS=487, BW=1951KiB/s (1998kB/s)(19.1MiB/10015msec) 00:48:39.516 slat (usec): min=4, max=111, avg=31.46, stdev=17.54 00:48:39.516 clat (usec): min=19909, max=58007, avg=32501.48, stdev=1622.00 00:48:39.516 lat (usec): min=19916, max=58023, avg=32532.94, stdev=1621.37 00:48:39.516 clat percentiles (usec): 00:48:39.516 | 1.00th=[28967], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:48:39.516 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 
00:48:39.516 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817], 00:48:39.516 | 99.00th=[34866], 99.50th=[45351], 99.90th=[45876], 99.95th=[45876], 00:48:39.516 | 99.99th=[57934] 00:48:39.516 bw ( KiB/s): min= 1840, max= 2048, per=4.11%, avg=1949.47, stdev=63.16, samples=19 00:48:39.516 iops : min= 460, max= 512, avg=487.37, stdev=15.79, samples=19 00:48:39.516 lat (msec) : 20=0.10%, 50=99.86%, 100=0.04% 00:48:39.516 cpu : usr=98.84%, sys=0.86%, ctx=24, majf=0, minf=26 00:48:39.516 IO depths : 1=6.0%, 2=12.2%, 4=24.8%, 8=50.6%, 16=6.5%, 32=0.0%, >=64=0.0% 00:48:39.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:39.516 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:39.516 issued rwts: total=4886,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:39.516 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:39.516 filename1: (groupid=0, jobs=1): err= 0: pid=517730: Tue Oct 1 22:47:33 2024 00:48:39.516 read: IOPS=489, BW=1956KiB/s (2003kB/s)(19.1MiB/10010msec) 00:48:39.516 slat (usec): min=5, max=110, avg=29.60, stdev=17.98 00:48:39.516 clat (usec): min=19589, max=40199, avg=32462.60, stdev=1139.86 00:48:39.516 lat (usec): min=19612, max=40224, avg=32492.20, stdev=1138.79 00:48:39.516 clat percentiles (usec): 00:48:39.516 | 1.00th=[29230], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:48:39.516 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:48:39.516 | 70.00th=[32900], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:48:39.516 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:48:39.516 | 99.99th=[40109] 00:48:39.516 bw ( KiB/s): min= 1920, max= 2048, per=4.12%, avg=1953.84, stdev=57.82, samples=19 00:48:39.516 iops : min= 480, max= 512, avg=488.42, stdev=14.48, samples=19 00:48:39.516 lat (msec) : 20=0.33%, 50=99.67% 00:48:39.516 cpu : usr=98.66%, sys=0.90%, ctx=113, majf=0, minf=25 00:48:39.516 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:48:39.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:39.516 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:39.516 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:39.516 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:39.516 filename1: (groupid=0, jobs=1): err= 0: pid=517731: Tue Oct 1 22:47:33 2024 00:48:39.516 read: IOPS=488, BW=1954KiB/s (2001kB/s)(19.1MiB/10020msec) 00:48:39.516 slat (usec): min=4, max=100, avg=21.91, stdev=19.56 00:48:39.516 clat (usec): min=19972, max=42586, avg=32570.54, stdev=1193.71 00:48:39.516 lat (usec): min=19978, max=42592, avg=32592.45, stdev=1191.13 00:48:39.516 clat percentiles (usec): 00:48:39.517 | 1.00th=[28967], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:48:39.517 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:48:39.517 | 70.00th=[32900], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:48:39.517 | 99.00th=[34866], 99.50th=[35390], 99.90th=[41681], 99.95th=[41681], 00:48:39.517 | 99.99th=[42730] 00:48:39.517 bw ( KiB/s): min= 1916, max= 2048, per=4.12%, avg=1953.26, stdev=58.18, samples=19 00:48:39.517 iops : min= 479, max= 512, avg=488.32, stdev=14.55, samples=19 00:48:39.517 lat (msec) : 20=0.10%, 50=99.90% 00:48:39.517 cpu : usr=98.71%, sys=0.98%, ctx=54, majf=0, minf=21 00:48:39.517 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:48:39.517 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:39.517 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:39.517 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:39.517 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:39.517 filename1: (groupid=0, jobs=1): err= 0: pid=517732: Tue Oct 1 22:47:33 2024 00:48:39.517 read: IOPS=493, BW=1976KiB/s (2023kB/s)(19.3MiB/10009msec) 00:48:39.517 slat (nsec): min=5755, max=66029, avg=14934.99, stdev=9558.78 00:48:39.517 clat (usec): min=7987, max=54793, avg=32263.76, stdev=3103.60 00:48:39.517 lat (usec): min=8001, max=54826, avg=32278.70, stdev=3103.08 00:48:39.517 clat percentiles (usec): 00:48:39.517 | 1.00th=[11731], 5.00th=[31589], 10.00th=[32113], 20.00th=[32113], 00:48:39.517 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:48:39.517 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:48:39.517 | 99.00th=[34866], 99.50th=[35390], 99.90th=[35390], 99.95th=[54789], 00:48:39.517 | 99.99th=[54789] 00:48:39.517 bw ( KiB/s): min= 1920, max= 2304, per=4.17%, avg=1973.63, stdev=98.17, samples=19 00:48:39.517 iops : min= 480, max= 576, avg=493.37, stdev=24.51, samples=19 00:48:39.517 lat (msec) : 10=0.97%, 20=1.03%, 50=97.94%, 100=0.06% 00:48:39.517 cpu : usr=98.94%, sys=0.77%, ctx=11, majf=0, minf=30 00:48:39.517 IO depths : 1=6.0%, 2=12.2%, 4=24.9%, 8=50.4%, 16=6.5%, 32=0.0%, >=64=0.0% 00:48:39.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:39.517 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:39.517 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:39.517 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:39.517 filename1: (groupid=0, jobs=1): err= 0: pid=517733: Tue Oct 1 22:47:33 2024 00:48:39.517 read: IOPS=492, BW=1968KiB/s (2015kB/s)(19.2MiB/10012msec) 00:48:39.517 slat (usec): min=4, max=124, avg=15.35, stdev=14.04 00:48:39.517 clat (usec): min=9756, max=57765, avg=32403.55, stdev=3021.03 00:48:39.517 lat (usec): min=9763, max=57778, avg=32418.91, stdev=3021.49 00:48:39.517 clat percentiles (usec): 00:48:39.517 | 1.00th=[21627], 5.00th=[27395], 10.00th=[31851], 20.00th=[32113], 00:48:39.517 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:48:39.517 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:48:39.517 | 99.00th=[39584], 99.50th=[40633], 99.90th=[57934], 99.95th=[57934], 00:48:39.517 | 99.99th=[57934] 00:48:39.517 bw ( KiB/s): min= 1792, max= 2144, per=4.15%, avg=1966.32, stdev=82.78, samples=19 00:48:39.517 iops : min= 448, max= 536, avg=491.58, stdev=20.69, samples=19 00:48:39.517 lat (msec) : 10=0.12%, 20=0.79%, 50=98.76%, 100=0.32% 00:48:39.517 cpu : usr=98.28%, sys=1.11%, ctx=285, majf=0, minf=22 00:48:39.517 IO depths : 1=3.8%, 2=9.4%, 4=23.0%, 8=55.0%, 16=8.8%, 32=0.0%, >=64=0.0% 00:48:39.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:39.517 complete : 0=0.0%, 4=93.7%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:39.517 issued rwts: total=4926,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:39.517 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:39.517 filename2: (groupid=0, jobs=1): err= 0: pid=517734: Tue Oct 1 22:47:33 2024 00:48:39.517 read: IOPS=493, BW=1972KiB/s (2020kB/s)(19.3MiB/10027msec) 00:48:39.517 slat (nsec): min=5676, max=80424, avg=15565.11, stdev=10005.73 00:48:39.517 clat (usec): min=8091, max=37398, avg=32308.91, 
stdev=2643.63 00:48:39.517 lat (usec): min=8126, max=37405, avg=32324.47, stdev=2643.46 00:48:39.517 clat percentiles (usec): 00:48:39.517 | 1.00th=[13173], 5.00th=[31589], 10.00th=[32113], 20.00th=[32113], 00:48:39.517 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:48:39.517 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:48:39.517 | 99.00th=[34866], 99.50th=[35390], 99.90th=[35390], 99.95th=[37487], 00:48:39.517 | 99.99th=[37487] 00:48:39.517 bw ( KiB/s): min= 1916, max= 2176, per=4.17%, avg=1973.68, stdev=77.85, samples=19 00:48:39.517 iops : min= 479, max= 544, avg=493.42, stdev=19.46, samples=19 00:48:39.517 lat (msec) : 10=0.32%, 20=1.29%, 50=98.38% 00:48:39.517 cpu : usr=98.98%, sys=0.71%, ctx=64, majf=0, minf=29 00:48:39.517 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:48:39.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:39.517 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:39.517 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:39.517 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:39.517 filename2: (groupid=0, jobs=1): err= 0: pid=517735: Tue Oct 1 22:47:33 2024 00:48:39.517 read: IOPS=488, BW=1955KiB/s (2002kB/s)(19.1MiB/10019msec) 00:48:39.517 slat (usec): min=4, max=118, avg=34.35, stdev=20.11 00:48:39.517 clat (usec): min=19958, max=40060, avg=32444.08, stdev=994.06 00:48:39.517 lat (usec): min=19966, max=40073, avg=32478.43, stdev=992.88 00:48:39.517 clat percentiles (usec): 00:48:39.517 | 1.00th=[31065], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:48:39.517 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:48:39.517 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:48:39.517 | 99.00th=[34341], 99.50th=[34341], 99.90th=[37487], 99.95th=[38536], 00:48:39.517 | 99.99th=[40109] 00:48:39.517 bw ( KiB/s): min= 1792, max= 2048, per=4.12%, avg=1953.37, stdev=71.61, samples=19 00:48:39.517 iops : min= 448, max= 512, avg=488.26, stdev=17.87, samples=19 00:48:39.517 lat (msec) : 20=0.12%, 50=99.88% 00:48:39.517 cpu : usr=98.61%, sys=0.94%, ctx=136, majf=0, minf=21 00:48:39.517 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:48:39.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:39.517 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:39.517 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:39.517 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:39.517 filename2: (groupid=0, jobs=1): err= 0: pid=517736: Tue Oct 1 22:47:33 2024 00:48:39.517 read: IOPS=489, BW=1956KiB/s (2003kB/s)(19.1MiB/10012msec) 00:48:39.517 slat (usec): min=4, max=109, avg=29.91, stdev=19.47 00:48:39.517 clat (usec): min=19544, max=34864, avg=32488.20, stdev=1046.13 00:48:39.517 lat (usec): min=19552, max=34870, avg=32518.11, stdev=1044.41 00:48:39.517 clat percentiles (usec): 00:48:39.517 | 1.00th=[31065], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:48:39.517 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:48:39.517 | 70.00th=[32900], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:48:39.517 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:48:39.517 | 99.99th=[34866] 00:48:39.517 bw ( KiB/s): min= 1916, max= 2048, per=4.12%, avg=1953.47, stdev=58.05, samples=19 00:48:39.517 iops : min= 479, max= 
512, avg=488.37, stdev=14.51, samples=19 00:48:39.517 lat (msec) : 20=0.33%, 50=99.67% 00:48:39.517 cpu : usr=98.90%, sys=0.82%, ctx=16, majf=0, minf=28 00:48:39.517 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:48:39.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:39.517 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:39.517 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:39.517 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:39.517 filename2: (groupid=0, jobs=1): err= 0: pid=517737: Tue Oct 1 22:47:33 2024 00:48:39.517 read: IOPS=487, BW=1950KiB/s (1996kB/s)(19.1MiB/10012msec) 00:48:39.517 slat (usec): min=5, max=118, avg=33.80, stdev=18.75 00:48:39.517 clat (usec): min=19804, max=56276, avg=32503.74, stdev=1642.34 00:48:39.517 lat (usec): min=19823, max=56291, avg=32537.54, stdev=1641.53 00:48:39.517 clat percentiles (usec): 00:48:39.517 | 1.00th=[31589], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:48:39.517 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:48:39.517 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817], 00:48:39.517 | 99.00th=[34341], 99.50th=[34866], 99.90th=[56361], 99.95th=[56361], 00:48:39.517 | 99.99th=[56361] 00:48:39.517 bw ( KiB/s): min= 1792, max= 2048, per=4.11%, avg=1946.95, stdev=68.52, samples=19 00:48:39.517 iops : min= 448, max= 512, avg=486.74, stdev=17.13, samples=19 00:48:39.517 lat (msec) : 20=0.20%, 50=99.47%, 100=0.33% 00:48:39.517 cpu : usr=98.82%, sys=0.89%, ctx=16, majf=0, minf=30 00:48:39.517 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:48:39.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:39.517 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:39.517 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:39.517 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:39.517 filename2: (groupid=0, jobs=1): err= 0: pid=517738: Tue Oct 1 22:47:33 2024 00:48:39.517 read: IOPS=487, BW=1949KiB/s (1996kB/s)(19.1MiB/10013msec) 00:48:39.517 slat (usec): min=5, max=122, avg=34.18, stdev=21.61 00:48:39.517 clat (usec): min=19652, max=56421, avg=32473.86, stdev=1654.76 00:48:39.517 lat (usec): min=19670, max=56436, avg=32508.05, stdev=1654.35 00:48:39.517 clat percentiles (usec): 00:48:39.517 | 1.00th=[31327], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:48:39.517 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:48:39.517 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817], 00:48:39.517 | 99.00th=[34341], 99.50th=[34341], 99.90th=[56361], 99.95th=[56361], 00:48:39.517 | 99.99th=[56361] 00:48:39.517 bw ( KiB/s): min= 1792, max= 2048, per=4.11%, avg=1946.95, stdev=68.52, samples=19 00:48:39.517 iops : min= 448, max= 512, avg=486.74, stdev=17.13, samples=19 00:48:39.517 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33% 00:48:39.517 cpu : usr=98.86%, sys=0.81%, ctx=96, majf=0, minf=24 00:48:39.517 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:48:39.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:39.517 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:39.517 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:39.517 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:39.518 filename2: (groupid=0, jobs=1): 
err= 0: pid=517739: Tue Oct 1 22:47:33 2024 00:48:39.518 read: IOPS=495, BW=1982KiB/s (2030kB/s)(19.4MiB/10013msec) 00:48:39.518 slat (usec): min=4, max=125, avg=28.68, stdev=21.69 00:48:39.518 clat (usec): min=12743, max=57028, avg=32032.63, stdev=3937.06 00:48:39.518 lat (usec): min=12755, max=57041, avg=32061.31, stdev=3939.70 00:48:39.518 clat percentiles (usec): 00:48:39.518 | 1.00th=[19530], 5.00th=[22938], 10.00th=[30016], 20.00th=[31851], 00:48:39.518 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:48:39.518 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[34866], 00:48:39.518 | 99.00th=[46400], 99.50th=[54264], 99.90th=[56886], 99.95th=[56886], 00:48:39.518 | 99.99th=[56886] 00:48:39.518 bw ( KiB/s): min= 1795, max= 2192, per=4.17%, avg=1975.74, stdev=101.37, samples=19 00:48:39.518 iops : min= 448, max= 548, avg=493.89, stdev=25.42, samples=19 00:48:39.518 lat (msec) : 20=1.45%, 50=97.78%, 100=0.77% 00:48:39.518 cpu : usr=98.54%, sys=1.00%, ctx=97, majf=0, minf=26 00:48:39.518 IO depths : 1=4.7%, 2=9.3%, 4=19.8%, 8=57.9%, 16=8.3%, 32=0.0%, >=64=0.0% 00:48:39.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:39.518 complete : 0=0.0%, 4=92.7%, 8=2.0%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:39.518 issued rwts: total=4962,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:39.518 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:39.518 filename2: (groupid=0, jobs=1): err= 0: pid=517740: Tue Oct 1 22:47:33 2024 00:48:39.518 read: IOPS=488, BW=1956KiB/s (2003kB/s)(19.1MiB/10006msec) 00:48:39.518 slat (usec): min=5, max=109, avg=19.28, stdev=17.53 00:48:39.518 clat (usec): min=16116, max=48093, avg=32574.06, stdev=1727.71 00:48:39.518 lat (usec): min=16124, max=48130, avg=32593.34, stdev=1727.07 00:48:39.518 clat percentiles (usec): 00:48:39.518 | 1.00th=[25822], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:48:39.518 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:48:39.518 | 70.00th=[32900], 80.00th=[32900], 90.00th=[33817], 95.00th=[33817], 00:48:39.518 | 99.00th=[39060], 99.50th=[40109], 99.90th=[45876], 99.95th=[45876], 00:48:39.518 | 99.99th=[47973] 00:48:39.518 bw ( KiB/s): min= 1916, max= 2048, per=4.12%, avg=1951.58, stdev=55.70, samples=19 00:48:39.518 iops : min= 479, max= 512, avg=487.89, stdev=13.92, samples=19 00:48:39.518 lat (msec) : 20=0.16%, 50=99.84% 00:48:39.518 cpu : usr=99.00%, sys=0.71%, ctx=40, majf=0, minf=25 00:48:39.518 IO depths : 1=5.5%, 2=11.5%, 4=24.4%, 8=51.5%, 16=7.1%, 32=0.0%, >=64=0.0% 00:48:39.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:39.518 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:39.518 issued rwts: total=4892,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:39.518 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:39.518 filename2: (groupid=0, jobs=1): err= 0: pid=517741: Tue Oct 1 22:47:33 2024 00:48:39.518 read: IOPS=503, BW=2012KiB/s (2061kB/s)(19.7MiB/10012msec) 00:48:39.518 slat (nsec): min=5682, max=64674, avg=9501.74, stdev=6033.90 00:48:39.518 clat (usec): min=4510, max=35392, avg=31727.38, stdev=4563.37 00:48:39.518 lat (usec): min=4520, max=35401, avg=31736.88, stdev=4562.76 00:48:39.518 clat percentiles (usec): 00:48:39.518 | 1.00th=[ 6390], 5.00th=[29492], 10.00th=[31327], 20.00th=[32113], 00:48:39.518 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:48:39.518 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 
95.00th=[34341], 00:48:39.518 | 99.00th=[35390], 99.50th=[35390], 99.90th=[35390], 99.95th=[35390], 00:48:39.518 | 99.99th=[35390] 00:48:39.518 bw ( KiB/s): min= 1920, max= 2432, per=4.24%, avg=2008.40, stdev=122.97, samples=20 00:48:39.518 iops : min= 480, max= 608, avg=502.10, stdev=30.74, samples=20 00:48:39.518 lat (msec) : 10=2.62%, 20=1.35%, 50=96.03% 00:48:39.518 cpu : usr=98.83%, sys=0.87%, ctx=11, majf=0, minf=75 00:48:39.518 IO depths : 1=5.3%, 2=10.9%, 4=22.8%, 8=53.6%, 16=7.4%, 32=0.0%, >=64=0.0% 00:48:39.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:39.518 complete : 0=0.0%, 4=93.6%, 8=0.7%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:39.518 issued rwts: total=5037,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:39.518 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:39.518 00:48:39.518 Run status group 0 (all jobs): 00:48:39.518 READ: bw=46.3MiB/s (48.5MB/s), 1949KiB/s-2156KiB/s (1996kB/s-2208kB/s), io=464MiB (486MB), run=10006-10028msec 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 
00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:39.518 bdev_null0 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 
-t tcp -a 10.0.0.2 -s 4420 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:39.518 [2024-10-01 22:47:33.538237] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:39.518 bdev_null1 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:39.518 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem 
in "${@:-1}" 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:48:39.519 { 00:48:39.519 "params": { 00:48:39.519 "name": "Nvme$subsystem", 00:48:39.519 "trtype": "$TEST_TRANSPORT", 00:48:39.519 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:39.519 "adrfam": "ipv4", 00:48:39.519 "trsvcid": "$NVMF_PORT", 00:48:39.519 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:39.519 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:39.519 "hdgst": ${hdgst:-false}, 00:48:39.519 "ddgst": ${ddgst:-false} 00:48:39.519 }, 00:48:39.519 "method": "bdev_nvme_attach_controller" 00:48:39.519 } 00:48:39.519 EOF 00:48:39.519 )") 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:48:39.519 { 00:48:39.519 "params": { 00:48:39.519 "name": "Nvme$subsystem", 00:48:39.519 "trtype": "$TEST_TRANSPORT", 00:48:39.519 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:39.519 "adrfam": "ipv4", 00:48:39.519 "trsvcid": "$NVMF_PORT", 00:48:39.519 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:39.519 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:39.519 "hdgst": ${hdgst:-false}, 00:48:39.519 "ddgst": ${ddgst:-false} 00:48:39.519 }, 00:48:39.519 "method": "bdev_nvme_attach_controller" 00:48:39.519 } 00:48:39.519 EOF 00:48:39.519 )") 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:48:39.519 
22:47:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:48:39.519 "params": { 00:48:39.519 "name": "Nvme0", 00:48:39.519 "trtype": "tcp", 00:48:39.519 "traddr": "10.0.0.2", 00:48:39.519 "adrfam": "ipv4", 00:48:39.519 "trsvcid": "4420", 00:48:39.519 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:48:39.519 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:48:39.519 "hdgst": false, 00:48:39.519 "ddgst": false 00:48:39.519 }, 00:48:39.519 "method": "bdev_nvme_attach_controller" 00:48:39.519 },{ 00:48:39.519 "params": { 00:48:39.519 "name": "Nvme1", 00:48:39.519 "trtype": "tcp", 00:48:39.519 "traddr": "10.0.0.2", 00:48:39.519 "adrfam": "ipv4", 00:48:39.519 "trsvcid": "4420", 00:48:39.519 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:48:39.519 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:48:39.519 "hdgst": false, 00:48:39.519 "ddgst": false 00:48:39.519 }, 00:48:39.519 "method": "bdev_nvme_attach_controller" 00:48:39.519 }' 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:48:39.519 22:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:39.519 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:48:39.519 ... 00:48:39.519 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:48:39.519 ... 
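[editor's note] The printf above emits the joined params objects that are fed to fio via --spdk_json_conf /dev/fd/62. Wrapped in the SPDK JSON-config envelope, the document has roughly this shape; the params are copied from the trace, while the wrapper keys follow the standard SPDK subsystems/config layout (treat the exact envelope emitted by gen_nvmf_target_json as an assumption):

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "method": "bdev_nvme_attach_controller",
              "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                          "adrfam": "ipv4", "trsvcid": "4420",
                          "subnqn": "nqn.2016-06.io.spdk:cnode0",
                          "hostnqn": "nqn.2016-06.io.spdk:host0",
                          "hdgst": false, "ddgst": false } },
            { "method": "bdev_nvme_attach_controller",
              "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                          "adrfam": "ipv4", "trsvcid": "4420",
                          "subnqn": "nqn.2016-06.io.spdk:cnode1",
                          "hostnqn": "nqn.2016-06.io.spdk:host1",
                          "hdgst": false, "ddgst": false } }
          ]
        }
      ]
    }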
00:48:39.519 fio-3.35 00:48:39.519 Starting 4 threads 00:48:44.881 00:48:44.881 filename0: (groupid=0, jobs=1): err= 0: pid=520119: Tue Oct 1 22:47:39 2024 00:48:44.881 read: IOPS=1982, BW=15.5MiB/s (16.2MB/s)(77.5MiB/5003msec) 00:48:44.881 slat (nsec): min=2931, max=24645, avg=6200.69, stdev=1860.17 00:48:44.881 clat (usec): min=1320, max=6501, avg=4019.22, stdev=548.15 00:48:44.881 lat (usec): min=1325, max=6507, avg=4025.42, stdev=548.07 00:48:44.881 clat percentiles (usec): 00:48:44.881 | 1.00th=[ 2966], 5.00th=[ 3490], 10.00th=[ 3589], 20.00th=[ 3752], 00:48:44.881 | 30.00th=[ 3818], 40.00th=[ 3818], 50.00th=[ 3851], 60.00th=[ 3884], 00:48:44.881 | 70.00th=[ 4080], 80.00th=[ 4228], 90.00th=[ 4555], 95.00th=[ 5407], 00:48:44.881 | 99.00th=[ 5932], 99.50th=[ 6063], 99.90th=[ 6325], 99.95th=[ 6390], 00:48:44.881 | 99.99th=[ 6521] 00:48:44.881 bw ( KiB/s): min=15248, max=16480, per=23.71%, avg=15811.56, stdev=417.83, samples=9 00:48:44.881 iops : min= 1906, max= 2060, avg=1976.44, stdev=52.23, samples=9 00:48:44.881 lat (msec) : 2=0.16%, 4=65.99%, 10=33.84% 00:48:44.881 cpu : usr=96.48%, sys=3.28%, ctx=7, majf=0, minf=0 00:48:44.881 IO depths : 1=0.1%, 2=0.2%, 4=70.6%, 8=29.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:44.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.881 complete : 0=0.0%, 4=93.7%, 8=6.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.881 issued rwts: total=9916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:44.881 latency : target=0, window=0, percentile=100.00%, depth=8 00:48:44.881 filename0: (groupid=0, jobs=1): err= 0: pid=520120: Tue Oct 1 22:47:39 2024 00:48:44.881 read: IOPS=2024, BW=15.8MiB/s (16.6MB/s)(79.2MiB/5008msec) 00:48:44.881 slat (nsec): min=5522, max=43322, avg=6317.18, stdev=2365.87 00:48:44.881 clat (usec): min=1515, max=8727, avg=3932.27, stdev=480.05 00:48:44.881 lat (usec): min=1521, max=8733, avg=3938.59, stdev=479.90 00:48:44.881 clat percentiles (usec): 00:48:44.881 | 1.00th=[ 2933], 5.00th=[ 3294], 10.00th=[ 3523], 20.00th=[ 3687], 00:48:44.881 | 30.00th=[ 3785], 40.00th=[ 3818], 50.00th=[ 3851], 60.00th=[ 3851], 00:48:44.881 | 70.00th=[ 3982], 80.00th=[ 4146], 90.00th=[ 4490], 95.00th=[ 4621], 00:48:44.881 | 99.00th=[ 5866], 99.50th=[ 6063], 99.90th=[ 6325], 99.95th=[ 6325], 00:48:44.881 | 99.99th=[ 8717] 00:48:44.881 bw ( KiB/s): min=15808, max=16672, per=24.32%, avg=16220.80, stdev=247.27, samples=10 00:48:44.881 iops : min= 1976, max= 2084, avg=2027.60, stdev=30.91, samples=10 00:48:44.881 lat (msec) : 2=0.03%, 4=70.31%, 10=29.66% 00:48:44.881 cpu : usr=97.12%, sys=2.66%, ctx=5, majf=0, minf=0 00:48:44.881 IO depths : 1=0.1%, 2=0.1%, 4=69.1%, 8=30.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:44.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.881 complete : 0=0.0%, 4=95.1%, 8=4.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.881 issued rwts: total=10141,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:44.881 latency : target=0, window=0, percentile=100.00%, depth=8 00:48:44.881 filename1: (groupid=0, jobs=1): err= 0: pid=520121: Tue Oct 1 22:47:39 2024 00:48:44.881 read: IOPS=1973, BW=15.4MiB/s (16.2MB/s)(77.1MiB/5002msec) 00:48:44.881 slat (nsec): min=5524, max=72383, avg=6369.46, stdev=3441.08 00:48:44.881 clat (usec): min=2480, max=7554, avg=4036.15, stdev=563.33 00:48:44.881 lat (usec): min=2486, max=7559, avg=4042.51, stdev=563.36 00:48:44.881 clat percentiles (usec): 00:48:44.881 | 1.00th=[ 3195], 5.00th=[ 3490], 10.00th=[ 3589], 20.00th=[ 3720], 00:48:44.881 | 30.00th=[ 3785], 
40.00th=[ 3818], 50.00th=[ 3884], 60.00th=[ 4015], 00:48:44.881 | 70.00th=[ 4113], 80.00th=[ 4178], 90.00th=[ 4555], 95.00th=[ 5604], 00:48:44.881 | 99.00th=[ 6063], 99.50th=[ 6259], 99.90th=[ 6718], 99.95th=[ 7177], 00:48:44.881 | 99.99th=[ 7570] 00:48:44.881 bw ( KiB/s): min=15616, max=16144, per=23.72%, avg=15818.67, stdev=161.79, samples=9 00:48:44.881 iops : min= 1952, max= 2018, avg=1977.33, stdev=20.22, samples=9 00:48:44.881 lat (msec) : 4=58.52%, 10=41.48% 00:48:44.881 cpu : usr=95.16%, sys=3.62%, ctx=181, majf=0, minf=9 00:48:44.881 IO depths : 1=0.1%, 2=0.1%, 4=73.0%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:44.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.881 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.881 issued rwts: total=9871,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:44.881 latency : target=0, window=0, percentile=100.00%, depth=8 00:48:44.881 filename1: (groupid=0, jobs=1): err= 0: pid=520122: Tue Oct 1 22:47:39 2024 00:48:44.881 read: IOPS=2361, BW=18.5MiB/s (19.3MB/s)(92.4MiB/5006msec) 00:48:44.881 slat (nsec): min=5538, max=94888, avg=6450.93, stdev=3590.53 00:48:44.881 clat (usec): min=1667, max=7378, avg=3369.23, stdev=504.57 00:48:44.881 lat (usec): min=1674, max=7384, avg=3375.68, stdev=504.77 00:48:44.881 clat percentiles (usec): 00:48:44.881 | 1.00th=[ 2409], 5.00th=[ 2671], 10.00th=[ 2835], 20.00th=[ 2900], 00:48:44.881 | 30.00th=[ 3064], 40.00th=[ 3163], 50.00th=[ 3261], 60.00th=[ 3490], 00:48:44.881 | 70.00th=[ 3752], 80.00th=[ 3818], 90.00th=[ 3851], 95.00th=[ 4113], 00:48:44.881 | 99.00th=[ 4948], 99.50th=[ 5145], 99.90th=[ 5604], 99.95th=[ 6128], 00:48:44.881 | 99.99th=[ 7373] 00:48:44.881 bw ( KiB/s): min=18304, max=19648, per=28.34%, avg=18904.00, stdev=462.02, samples=10 00:48:44.881 iops : min= 2288, max= 2456, avg=2363.00, stdev=57.75, samples=10 00:48:44.881 lat (msec) : 2=0.18%, 4=93.03%, 10=6.79% 00:48:44.881 cpu : usr=97.12%, sys=2.62%, ctx=10, majf=0, minf=11 00:48:44.881 IO depths : 1=0.1%, 2=6.0%, 4=63.2%, 8=30.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:44.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.882 complete : 0=0.0%, 4=95.0%, 8=5.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.882 issued rwts: total=11823,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:44.882 latency : target=0, window=0, percentile=100.00%, depth=8 00:48:44.882 00:48:44.882 Run status group 0 (all jobs): 00:48:44.882 READ: bw=65.1MiB/s (68.3MB/s), 15.4MiB/s-18.5MiB/s (16.2MB/s-19.3MB/s), io=326MiB (342MB), run=5002-5008msec 00:48:44.882 22:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:48:44.882 22:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:48:44.882 22:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:48:44.882 22:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:48:44.882 22:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:48:44.882 22:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:48:44.882 22:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:44.882 22:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:44.882 22:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:44.882 22:47:40 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:48:44.882 22:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:44.882 22:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:44.882 22:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:44.882 22:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:48:44.882 22:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:48:44.882 22:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:48:44.882 22:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:48:44.882 22:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:44.882 22:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:45.144 22:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:45.144 22:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:48:45.144 22:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:45.144 22:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:45.144 22:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:45.144 00:48:45.144 real 0m24.907s 00:48:45.144 user 5m14.020s 00:48:45.144 sys 0m4.634s 00:48:45.144 22:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:48:45.144 22:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:45.144 ************************************ 00:48:45.144 END TEST fio_dif_rand_params 00:48:45.144 ************************************ 00:48:45.144 22:47:40 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:48:45.144 22:47:40 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:48:45.144 22:47:40 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:48:45.144 22:47:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:48:45.144 ************************************ 00:48:45.144 START TEST fio_dif_digest 00:48:45.144 ************************************ 00:48:45.144 22:47:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:48:45.144 22:47:40 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:48:45.144 22:47:40 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:48:45.144 22:47:40 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:48:45.144 22:47:40 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:48:45.144 22:47:40 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:48:45.144 22:47:40 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:48:45.144 22:47:40 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:48:45.144 22:47:40 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:48:45.144 22:47:40 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:48:45.144 22:47:40 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:48:45.144 22:47:40 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:48:45.144 22:47:40 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 
00:48:45.144 22:47:40 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:48:45.144 22:47:40 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:48:45.144 22:47:40 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:48:45.144 22:47:40 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:48:45.144 22:47:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:45.144 22:47:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:48:45.144 bdev_null0 00:48:45.144 22:47:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:45.144 22:47:40 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:48:45.144 22:47:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:45.144 22:47:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:48:45.144 22:47:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:45.144 22:47:40 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:48:45.144 22:47:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:45.144 22:47:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:48:45.144 22:47:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:45.144 22:47:40 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:48:45.144 22:47:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:45.144 22:47:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:48:45.144 [2024-10-01 22:47:40.266836] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:48:45.144 22:47:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:45.144 22:47:40 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:48:45.144 22:47:40 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:48:45.144 22:47:40 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:48:45.144 22:47:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # config=() 00:48:45.144 22:47:40 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:45.144 22:47:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # local subsystem config 00:48:45.144 22:47:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:48:45.144 22:47:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:45.144 22:47:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:48:45.144 { 00:48:45.144 "params": { 00:48:45.144 "name": "Nvme$subsystem", 00:48:45.144 "trtype": "$TEST_TRANSPORT", 00:48:45.144 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:45.144 "adrfam": "ipv4", 00:48:45.144 "trsvcid": "$NVMF_PORT", 00:48:45.145 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:45.145 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:48:45.145 "hdgst": ${hdgst:-false}, 00:48:45.145 "ddgst": ${ddgst:-false} 00:48:45.145 }, 00:48:45.145 "method": "bdev_nvme_attach_controller" 00:48:45.145 } 00:48:45.145 EOF 00:48:45.145 )") 00:48:45.145 22:47:40 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:48:45.145 22:47:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:48:45.145 22:47:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:48:45.145 22:47:40 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:48:45.145 22:47:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:48:45.145 22:47:40 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:48:45.145 22:47:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:48:45.145 22:47:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:48:45.145 22:47:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:48:45.145 22:47:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:48:45.145 22:47:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # cat 00:48:45.145 22:47:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:48:45.145 22:47:40 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:48:45.145 22:47:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:48:45.145 22:47:40 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:48:45.145 22:47:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:48:45.145 22:47:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # jq . 
00:48:45.145 22:47:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@583 -- # IFS=, 00:48:45.145 22:47:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:48:45.145 "params": { 00:48:45.145 "name": "Nvme0", 00:48:45.145 "trtype": "tcp", 00:48:45.145 "traddr": "10.0.0.2", 00:48:45.145 "adrfam": "ipv4", 00:48:45.145 "trsvcid": "4420", 00:48:45.145 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:48:45.145 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:48:45.145 "hdgst": true, 00:48:45.145 "ddgst": true 00:48:45.145 }, 00:48:45.145 "method": "bdev_nvme_attach_controller" 00:48:45.145 }' 00:48:45.145 22:47:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:48:45.145 22:47:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:48:45.145 22:47:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:48:45.145 22:47:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:48:45.145 22:47:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:48:45.145 22:47:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:48:45.145 22:47:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:48:45.145 22:47:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:48:45.145 22:47:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:48:45.145 22:47:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:45.716 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:48:45.716 ... 
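[editor's note] The filename0 banner above reflects the job file gen_fio_conf writes for the digest pass (randread, 128k blocks, iodepth=3, 3 threads, 10s runtime, one file). A rough hand-written equivalent is sketched below; the real file is generated on the fly, and the bdev name Nvme0n1 is an assumption derived from the attach name Nvme0:

    [global]
    thread=1
    ioengine=spdk_bdev
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3
    runtime=10
    time_based=1

    [filename0]
    filename=Nvme0n1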
00:48:45.716 fio-3.35 00:48:45.716 Starting 3 threads 00:48:57.951 00:48:57.951 filename0: (groupid=0, jobs=1): err= 0: pid=521439: Tue Oct 1 22:47:51 2024 00:48:57.951 read: IOPS=208, BW=26.0MiB/s (27.3MB/s)(261MiB/10009msec) 00:48:57.951 slat (nsec): min=5903, max=46935, avg=7589.94, stdev=1795.76 00:48:57.951 clat (usec): min=8874, max=55746, avg=14397.52, stdev=2421.53 00:48:57.951 lat (usec): min=8881, max=55752, avg=14405.11, stdev=2421.44 00:48:57.951 clat percentiles (usec): 00:48:57.951 | 1.00th=[11338], 5.00th=[12649], 10.00th=[13042], 20.00th=[13435], 00:48:57.951 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14222], 60.00th=[14484], 00:48:57.951 | 70.00th=[14746], 80.00th=[15139], 90.00th=[15664], 95.00th=[16188], 00:48:57.951 | 99.00th=[17171], 99.50th=[17433], 99.90th=[55313], 99.95th=[55313], 00:48:57.951 | 99.99th=[55837] 00:48:57.951 bw ( KiB/s): min=24576, max=28160, per=30.98%, avg=26649.60, stdev=783.12, samples=20 00:48:57.951 iops : min= 192, max= 220, avg=208.20, stdev= 6.12, samples=20 00:48:57.951 lat (msec) : 10=0.29%, 20=99.42%, 100=0.29% 00:48:57.951 cpu : usr=95.54%, sys=4.22%, ctx=21, majf=0, minf=151 00:48:57.951 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:57.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:57.951 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:57.951 issued rwts: total=2084,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:57.951 latency : target=0, window=0, percentile=100.00%, depth=3 00:48:57.951 filename0: (groupid=0, jobs=1): err= 0: pid=521440: Tue Oct 1 22:47:51 2024 00:48:57.951 read: IOPS=214, BW=26.8MiB/s (28.1MB/s)(269MiB/10007msec) 00:48:57.951 slat (nsec): min=5836, max=33469, avg=7817.90, stdev=1635.95 00:48:57.951 clat (usec): min=8387, max=57329, avg=13964.00, stdev=2499.66 00:48:57.951 lat (usec): min=8394, max=57336, avg=13971.82, stdev=2499.95 00:48:57.951 clat percentiles (usec): 00:48:57.951 | 1.00th=[10814], 5.00th=[12125], 10.00th=[12518], 20.00th=[12911], 00:48:57.951 | 30.00th=[13304], 40.00th=[13566], 50.00th=[13829], 60.00th=[14091], 00:48:57.951 | 70.00th=[14484], 80.00th=[14746], 90.00th=[15270], 95.00th=[15795], 00:48:57.951 | 99.00th=[16909], 99.50th=[17433], 99.90th=[55837], 99.95th=[56886], 00:48:57.951 | 99.99th=[57410] 00:48:57.951 bw ( KiB/s): min=25394, max=29184, per=31.97%, avg=27502.42, stdev=913.86, samples=19 00:48:57.951 iops : min= 198, max= 228, avg=214.84, stdev= 7.19, samples=19 00:48:57.951 lat (msec) : 10=0.51%, 20=99.21%, 100=0.28% 00:48:57.951 cpu : usr=94.92%, sys=4.84%, ctx=19, majf=0, minf=127 00:48:57.951 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:57.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:57.951 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:57.951 issued rwts: total=2148,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:57.951 latency : target=0, window=0, percentile=100.00%, depth=3 00:48:57.951 filename0: (groupid=0, jobs=1): err= 0: pid=521441: Tue Oct 1 22:47:51 2024 00:48:57.951 read: IOPS=250, BW=31.4MiB/s (32.9MB/s)(315MiB/10046msec) 00:48:57.951 slat (nsec): min=5956, max=66325, avg=7923.79, stdev=2052.28 00:48:57.951 clat (usec): min=7559, max=53714, avg=11932.41, stdev=1414.25 00:48:57.951 lat (usec): min=7568, max=53721, avg=11940.33, stdev=1414.09 00:48:57.951 clat percentiles (usec): 00:48:57.951 | 1.00th=[ 8455], 5.00th=[10552], 10.00th=[10814], 20.00th=[11207], 00:48:57.951 | 
30.00th=[11469], 40.00th=[11731], 50.00th=[11994], 60.00th=[12125], 00:48:57.951 | 70.00th=[12387], 80.00th=[12649], 90.00th=[12911], 95.00th=[13173], 00:48:57.951 | 99.00th=[13698], 99.50th=[13960], 99.90th=[15401], 99.95th=[46400], 00:48:57.951 | 99.99th=[53740] 00:48:57.951 bw ( KiB/s): min=30976, max=33280, per=37.46%, avg=32230.40, stdev=568.81, samples=20 00:48:57.951 iops : min= 242, max= 260, avg=251.80, stdev= 4.44, samples=20 00:48:57.951 lat (msec) : 10=2.46%, 20=97.46%, 50=0.04%, 100=0.04% 00:48:57.951 cpu : usr=93.25%, sys=4.96%, ctx=612, majf=0, minf=144 00:48:57.951 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:57.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:57.951 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:57.951 issued rwts: total=2520,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:57.951 latency : target=0, window=0, percentile=100.00%, depth=3 00:48:57.951 00:48:57.951 Run status group 0 (all jobs): 00:48:57.951 READ: bw=84.0MiB/s (88.1MB/s), 26.0MiB/s-31.4MiB/s (27.3MB/s-32.9MB/s), io=844MiB (885MB), run=10007-10046msec 00:48:57.951 22:47:51 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:48:57.951 22:47:51 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:48:57.951 22:47:51 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:48:57.951 22:47:51 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:48:57.951 22:47:51 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:48:57.951 22:47:51 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:48:57.951 22:47:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:57.951 22:47:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:48:57.951 22:47:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:57.951 22:47:51 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:48:57.951 22:47:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:57.951 22:47:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:48:57.951 22:47:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:57.951 00:48:57.951 real 0m11.245s 00:48:57.951 user 0m40.071s 00:48:57.951 sys 0m1.793s 00:48:57.951 22:47:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:48:57.951 22:47:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:48:57.951 ************************************ 00:48:57.951 END TEST fio_dif_digest 00:48:57.951 ************************************ 00:48:57.951 22:47:51 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:48:57.951 22:47:51 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:48:57.951 22:47:51 nvmf_dif -- nvmf/common.sh@514 -- # nvmfcleanup 00:48:57.951 22:47:51 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:48:57.951 22:47:51 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:48:57.951 22:47:51 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:48:57.951 22:47:51 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:48:57.951 22:47:51 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:48:57.951 rmmod nvme_tcp 00:48:57.951 rmmod nvme_fabrics 00:48:57.951 rmmod nvme_keyring 00:48:57.951 22:47:51 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:48:57.951 22:47:51 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:48:57.951 22:47:51 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:48:57.951 22:47:51 nvmf_dif -- nvmf/common.sh@515 -- # '[' -n 511052 ']' 00:48:57.951 22:47:51 nvmf_dif -- nvmf/common.sh@516 -- # killprocess 511052 00:48:57.951 22:47:51 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 511052 ']' 00:48:57.951 22:47:51 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 511052 00:48:57.951 22:47:51 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:48:57.951 22:47:51 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:48:57.951 22:47:51 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 511052 00:48:57.951 22:47:51 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:48:57.951 22:47:51 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:48:57.951 22:47:51 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 511052' 00:48:57.951 killing process with pid 511052 00:48:57.951 22:47:51 nvmf_dif -- common/autotest_common.sh@969 -- # kill 511052 00:48:57.951 22:47:51 nvmf_dif -- common/autotest_common.sh@974 -- # wait 511052 00:48:57.951 22:47:51 nvmf_dif -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:48:57.951 22:47:51 nvmf_dif -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:48:59.864 Waiting for block devices as requested 00:48:59.864 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:48:59.864 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:48:59.864 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:48:59.864 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:48:59.864 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:49:00.125 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:49:00.125 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:49:00.125 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:49:00.386 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:49:00.386 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:49:00.386 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:49:00.648 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:49:00.648 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:49:00.648 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:49:00.909 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:49:00.909 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:49:00.909 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:49:01.170 22:47:56 nvmf_dif -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:49:01.170 22:47:56 nvmf_dif -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:49:01.170 22:47:56 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:49:01.170 22:47:56 nvmf_dif -- nvmf/common.sh@789 -- # iptables-save 00:49:01.170 22:47:56 nvmf_dif -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:49:01.170 22:47:56 nvmf_dif -- nvmf/common.sh@789 -- # iptables-restore 00:49:01.170 22:47:56 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:49:01.170 22:47:56 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:49:01.170 22:47:56 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:01.170 22:47:56 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:49:01.170 22:47:56 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:03.715 22:47:58 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:49:03.715 
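The cleanup traced above is worth noting: every firewall rule the harness adds is tagged with an 'SPDK_NVMF' comment, so teardown can drop all of them in one pass by filtering the saved ruleset rather than tracking individual rule handles. A minimal sketch of that idiom, using the interface and port that appear in this log (this is the effect of the ipts/iptr helpers, not their exact source):

# add a rule carrying a recognizable comment tag
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# teardown: reload the ruleset minus every tagged rule, leaving unrelated rules intact
iptables-save | grep -v SPDK_NVMF | iptables-restore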
00:49:03.715 real 1m18.295s 00:49:03.715 user 8m2.628s 00:49:03.715 sys 0m21.602s 00:49:03.715 22:47:58 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:49:03.715 22:47:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:49:03.715 ************************************ 00:49:03.715 END TEST nvmf_dif 00:49:03.715 ************************************ 00:49:03.715 22:47:58 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:49:03.715 22:47:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:49:03.715 22:47:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:49:03.715 22:47:58 -- common/autotest_common.sh@10 -- # set +x 00:49:03.715 ************************************ 00:49:03.715 START TEST nvmf_abort_qd_sizes 00:49:03.715 ************************************ 00:49:03.715 22:47:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:49:03.715 * Looking for test storage... 00:49:03.715 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:49:03.715 22:47:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:49:03.715 22:47:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lcov --version 00:49:03.715 22:47:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:49:03.715 22:47:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:49:03.715 22:47:58 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:49:03.715 22:47:58 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:49:03.715 22:47:58 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:49:03.715 22:47:58 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:49:03.715 22:47:58 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:49:03.715 22:47:58 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:49:03.715 22:47:58 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:49:03.715 22:47:58 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:49:03.715 22:47:58 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:49:03.715 22:47:58 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:49:03.715 22:47:58 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:49:03.715 22:47:58 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:49:03.715 22:47:58 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:49:03.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:03.716 --rc genhtml_branch_coverage=1 00:49:03.716 --rc genhtml_function_coverage=1 00:49:03.716 --rc genhtml_legend=1 00:49:03.716 --rc geninfo_all_blocks=1 00:49:03.716 --rc geninfo_unexecuted_blocks=1 00:49:03.716 00:49:03.716 ' 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:49:03.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:03.716 --rc genhtml_branch_coverage=1 00:49:03.716 --rc genhtml_function_coverage=1 00:49:03.716 --rc genhtml_legend=1 00:49:03.716 --rc geninfo_all_blocks=1 00:49:03.716 --rc geninfo_unexecuted_blocks=1 00:49:03.716 00:49:03.716 ' 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:49:03.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:03.716 --rc genhtml_branch_coverage=1 00:49:03.716 --rc genhtml_function_coverage=1 00:49:03.716 --rc genhtml_legend=1 00:49:03.716 --rc geninfo_all_blocks=1 00:49:03.716 --rc geninfo_unexecuted_blocks=1 00:49:03.716 00:49:03.716 ' 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:49:03.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:03.716 --rc genhtml_branch_coverage=1 00:49:03.716 --rc genhtml_function_coverage=1 00:49:03.716 --rc genhtml_legend=1 00:49:03.716 --rc geninfo_all_blocks=1 00:49:03.716 --rc geninfo_unexecuted_blocks=1 00:49:03.716 00:49:03.716 ' 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:49:03.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # prepare_net_devs 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # local -g is_hw=no 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # remove_spdk_ns 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:49:03.716 22:47:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:49:11.857 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:49:11.857 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:49:11.857 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:49:11.857 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:49:11.857 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:49:11.857 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:49:11.857 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:49:11.857 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:49:11.857 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:49:11.857 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:49:11.857 22:48:05 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:49:11.857 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:49:11.857 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:49:11.857 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:49:11.857 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:49:11.857 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:49:11.857 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:49:11.857 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:49:11.857 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:49:11.857 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:49:11.857 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:49:11.857 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:49:11.857 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:49:11.857 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:49:11.857 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:49:11.857 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:49:11.857 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:49:11.857 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:49:11.857 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:49:11.857 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:49:11.857 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:49:11.857 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:49:11.857 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:49:11.857 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:49:11.857 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:49:11.857 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:49:11.858 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:49:11.858 Found net devices under 0000:4b:00.0: cvl_0_0 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:49:11.858 Found net devices under 0000:4b:00.1: cvl_0_1 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # is_hw=yes 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:49:11.858 22:48:05 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:49:11.858 22:48:05 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:49:11.858 22:48:06 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:49:11.858 22:48:06 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:49:11.858 22:48:06 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:49:11.858 22:48:06 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:49:11.858 22:48:06 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:49:11.858 22:48:06 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:49:11.858 22:48:06 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:49:11.858 22:48:06 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:49:11.858 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:49:11.858 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:49:11.858 00:49:11.858 --- 10.0.0.2 ping statistics --- 00:49:11.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:11.858 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:49:11.858 22:48:06 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:49:11.858 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
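Before the test can run against the physical NIC, the harness splits the adapter's two ports across network namespaces so that target and initiator traffic crosses a real link. A condensed sketch of the topology setup traced above, with the device names and addresses taken from this log; the surrounding ping output verifies both directions:

ip netns add cvl_0_0_ns_spdk                        # target side gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # first port moves into the target namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                 # second port stays with the initiator
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator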
00:49:11.858 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:49:11.858 00:49:11.858 --- 10.0.0.1 ping statistics --- 00:49:11.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:11.858 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:49:11.858 22:48:06 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:49:11.858 22:48:06 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # return 0 00:49:11.858 22:48:06 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:49:11.858 22:48:06 nvmf_abort_qd_sizes -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:49:14.405 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:49:14.405 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:49:14.406 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:49:14.666 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:49:14.666 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:49:14.666 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:49:14.666 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:49:14.666 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:49:14.666 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:49:14.666 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:49:14.666 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:49:14.666 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:49:14.666 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:49:14.666 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:49:14.666 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:49:14.666 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:49:14.666 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:49:15.236 22:48:10 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:49:15.236 22:48:10 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:49:15.236 22:48:10 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:49:15.236 22:48:10 nvmf_abort_qd_sizes -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:49:15.236 22:48:10 nvmf_abort_qd_sizes -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:49:15.236 22:48:10 nvmf_abort_qd_sizes -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:49:15.236 22:48:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:49:15.236 22:48:10 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:49:15.236 22:48:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:49:15.236 22:48:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:49:15.236 22:48:10 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # nvmfpid=531270 00:49:15.236 22:48:10 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # waitforlisten 531270 00:49:15.236 22:48:10 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:49:15.236 22:48:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 531270 ']' 00:49:15.236 22:48:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:15.236 22:48:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:49:15.236 22:48:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:49:15.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:49:15.236 22:48:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:49:15.236 22:48:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:49:15.236 [2024-10-01 22:48:10.296950] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:49:15.236 [2024-10-01 22:48:10.297000] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:49:15.236 [2024-10-01 22:48:10.363192] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:49:15.236 [2024-10-01 22:48:10.429607] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:49:15.236 [2024-10-01 22:48:10.429648] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:49:15.236 [2024-10-01 22:48:10.429656] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:49:15.236 [2024-10-01 22:48:10.429663] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:49:15.236 [2024-10-01 22:48:10.429669] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:49:15.236 [2024-10-01 22:48:10.429734] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:49:15.236 [2024-10-01 22:48:10.429848] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:49:15.236 [2024-10-01 22:48:10.430002] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:49:15.236 [2024-10-01 22:48:10.430003] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:49:16.208 22:48:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:49:16.208 22:48:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:49:16.208 22:48:11 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:49:16.208 22:48:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:49:16.208 22:48:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:49:16.208 22:48:11 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:49:16.208 22:48:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:49:16.208 22:48:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:49:16.208 22:48:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:49:16.208 22:48:11 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:49:16.208 22:48:11 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:49:16.208 22:48:11 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:49:16.208 22:48:11 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:49:16.208 22:48:11 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:49:16.208 22:48:11 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:49:16.208 22:48:11 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:49:16.208 
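nvme_in_userspace, traced in the following lines, selects PCI functions of class 0x010802 (NVMe) that are still bound to the kernel nvme driver, which yields 0000:65:00.0 on this machine. A rough stand-alone equivalent of the same check (an lspci-based assumption for illustration, not the script's actual pci_bus_cache implementation):

# list NVMe-class PCI functions that the kernel nvme driver currently owns
for bdf in $(lspci -Dmm -d ::0108 | awk '{print $1}'); do
    [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && echo "$bdf"
done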
22:48:11 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:49:16.208 22:48:11 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:49:16.208 22:48:11 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:49:16.208 22:48:11 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:49:16.208 22:48:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:49:16.208 22:48:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:49:16.208 22:48:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:49:16.208 22:48:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:49:16.208 22:48:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:49:16.208 22:48:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:49:16.208 ************************************ 00:49:16.208 START TEST spdk_target_abort 00:49:16.208 ************************************ 00:49:16.208 22:48:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:49:16.208 22:48:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:49:16.208 22:48:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:49:16.208 22:48:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:49:16.208 22:48:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:49:16.468 spdk_targetn1 00:49:16.468 22:48:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:49:16.468 22:48:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:49:16.468 22:48:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:49:16.468 22:48:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:49:16.468 [2024-10-01 22:48:11.507631] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:49:16.468 22:48:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:49:16.468 22:48:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:49:16.468 22:48:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:49:16.468 22:48:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:49:16.468 22:48:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:49:16.468 22:48:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:49:16.468 22:48:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:49:16.468 22:48:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:49:16.468 22:48:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:49:16.468 22:48:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:49:16.468 22:48:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:49:16.468 22:48:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:49:16.468 [2024-10-01 22:48:11.547908] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:49:16.468 22:48:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:49:16.468 22:48:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:49:16.468 22:48:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:49:16.468 22:48:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:49:16.468 22:48:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:49:16.468 22:48:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:49:16.468 22:48:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:49:16.468 22:48:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:49:16.468 22:48:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:49:16.468 22:48:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:49:16.468 22:48:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:49:16.468 22:48:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:49:16.468 22:48:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:49:16.468 22:48:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:49:16.468 22:48:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:49:16.468 22:48:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:49:16.468 22:48:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:49:16.468 22:48:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:49:16.468 22:48:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:49:16.468 22:48:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:49:16.468 22:48:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:49:16.468 22:48:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:49:16.729 [2024-10-01 22:48:11.724150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
cid:190 nsid:1 lba:160 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:49:16.729 [2024-10-01 22:48:11.724181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0015 p:1 m:0 dnr:0 00:49:16.729 [2024-10-01 22:48:11.730144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:280 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:49:16.729 [2024-10-01 22:48:11.730160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0026 p:1 m:0 dnr:0 00:49:16.729 [2024-10-01 22:48:11.746199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:840 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:49:16.729 [2024-10-01 22:48:11.746221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:006a p:1 m:0 dnr:0 00:49:16.729 [2024-10-01 22:48:11.755373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:1176 len:8 PRP1 0x2000078be000 PRP2 0x0 00:49:16.729 [2024-10-01 22:48:11.755390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0094 p:1 m:0 dnr:0 00:49:16.729 [2024-10-01 22:48:11.768996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:1640 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:49:16.729 [2024-10-01 22:48:11.769013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00ce p:1 m:0 dnr:0 00:49:16.729 [2024-10-01 22:48:11.826868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3744 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:49:16.729 [2024-10-01 22:48:11.826886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00d5 p:0 m:0 dnr:0 00:49:20.024 Initializing NVMe Controllers 00:49:20.024 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:49:20.024 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:49:20.024 Initialization complete. Launching workers. 
00:49:20.024 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12501, failed: 6 00:49:20.024 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3390, failed to submit 9117 00:49:20.024 success 6, unsuccessful 3384, failed 0 00:49:20.024 22:48:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:49:20.024 22:48:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:49:20.024 [2024-10-01 22:48:15.046667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:181 nsid:1 lba:488 len:8 PRP1 0x200007c50000 PRP2 0x0 00:49:20.024 [2024-10-01 22:48:15.046713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:181 cdw0:0 sqhd:0048 p:1 m:0 dnr:0 00:49:20.024 [2024-10-01 22:48:15.100499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:184 nsid:1 lba:1704 len:8 PRP1 0x200007c52000 PRP2 0x0 00:49:20.024 [2024-10-01 22:48:15.100527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:184 cdw0:0 sqhd:00da p:1 m:0 dnr:0 00:49:20.024 [2024-10-01 22:48:15.107496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:168 nsid:1 lba:1848 len:8 PRP1 0x200007c54000 PRP2 0x0 00:49:20.024 [2024-10-01 22:48:15.107518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:168 cdw0:0 sqhd:00e8 p:1 m:0 dnr:0 00:49:20.024 [2024-10-01 22:48:15.146726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:177 nsid:1 lba:2640 len:8 PRP1 0x200007c46000 PRP2 0x0 00:49:20.024 [2024-10-01 22:48:15.146751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:177 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:49:21.410 [2024-10-01 22:48:16.349808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:174 nsid:1 lba:29672 len:8 PRP1 0x200007c58000 PRP2 0x0 00:49:21.410 [2024-10-01 22:48:16.349848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:174 cdw0:0 sqhd:0085 p:1 m:0 dnr:0 00:49:21.410 [2024-10-01 22:48:16.605960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:187 nsid:1 lba:35648 len:8 PRP1 0x200007c5a000 PRP2 0x0 00:49:21.410 [2024-10-01 22:48:16.605989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:187 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:49:23.324 Initializing NVMe Controllers 00:49:23.324 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:49:23.324 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:49:23.324 Initialization complete. Launching workers. 
00:49:23.324 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8481, failed: 6 00:49:23.324 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1192, failed to submit 7295 00:49:23.324 success 6, unsuccessful 1186, failed 0 00:49:23.324 22:48:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:49:23.324 22:48:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:49:26.622 Initializing NVMe Controllers 00:49:26.622 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:49:26.622 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:49:26.622 Initialization complete. Launching workers. 00:49:26.622 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 41679, failed: 0 00:49:26.622 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2651, failed to submit 39028 00:49:26.622 success 0, unsuccessful 2651, failed 0 00:49:26.622 22:48:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:49:26.622 22:48:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:49:26.622 22:48:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:49:26.622 22:48:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:49:26.622 22:48:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:49:26.622 22:48:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:49:26.622 22:48:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:49:28.003 22:48:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:49:28.003 22:48:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 531270 00:49:28.003 22:48:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 531270 ']' 00:49:28.003 22:48:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 531270 00:49:28.003 22:48:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:49:28.003 22:48:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:49:28.003 22:48:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 531270 00:49:28.264 22:48:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:49:28.264 22:48:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:49:28.264 22:48:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 531270' 00:49:28.264 killing process with pid 531270 00:49:28.264 22:48:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 531270 00:49:28.264 22:48:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # 
wait 531270 00:49:28.264 00:49:28.264 real 0m12.255s 00:49:28.264 user 0m49.975s 00:49:28.264 sys 0m1.839s 00:49:28.264 22:48:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:49:28.264 22:48:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:49:28.264 ************************************ 00:49:28.264 END TEST spdk_target_abort 00:49:28.264 ************************************ 00:49:28.264 22:48:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:49:28.264 22:48:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:49:28.264 22:48:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:49:28.264 22:48:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:49:28.524 ************************************ 00:49:28.524 START TEST kernel_target_abort 00:49:28.524 ************************************ 00:49:28.524 22:48:23 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:49:28.524 22:48:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:49:28.524 22:48:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@767 -- # local ip 00:49:28.524 22:48:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates=() 00:49:28.524 22:48:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # local -A ip_candidates 00:49:28.524 22:48:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:28.525 22:48:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:28.525 22:48:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:49:28.525 22:48:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:28.525 22:48:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:49:28.525 22:48:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:49:28.525 22:48:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:49:28.525 22:48:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:49:28.525 22:48:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:49:28.525 22:48:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:49:28.525 22:48:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:49:28.525 22:48:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:49:28.525 22:48:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:49:28.525 22:48:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # local block nvme 00:49:28.525 22:48:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:49:28.525 22:48:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # modprobe nvmet 00:49:28.525 22:48:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:49:28.525 22:48:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:49:31.829 Waiting for block devices as requested 00:49:31.829 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:49:31.829 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:49:32.091 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:49:32.091 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:49:32.091 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:49:32.091 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:49:32.352 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:49:32.352 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:49:32.352 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:49:32.614 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:49:32.614 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:49:32.876 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:49:32.876 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:49:32.876 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:49:33.138 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:49:33.138 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:49:33.138 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:49:33.399 22:48:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:49:33.399 22:48:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:49:33.399 22:48:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:49:33.399 22:48:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:49:33.399 22:48:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:49:33.399 22:48:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:49:33.399 22:48:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:49:33.399 22:48:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:49:33.399 22:48:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:49:33.399 No valid GPT data, bailing 00:49:33.399 22:48:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:49:33.399 22:48:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:49:33.660 22:48:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:49:33.660 22:48:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:49:33.660 22:48:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:49:33.660 22:48:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:49:33.660 22:48:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:49:33.660 22:48:28 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:49:33.660 22:48:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:49:33.660 22:48:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:49:33.660 22:48:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:49:33.660 22:48:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:49:33.660 22:48:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:49:33.660 22:48:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo tcp 00:49:33.660 22:48:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 4420 00:49:33.660 22:48:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo ipv4 00:49:33.660 22:48:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:49:33.660 22:48:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -a 10.0.0.1 -t tcp -s 4420 00:49:33.660 00:49:33.660 Discovery Log Number of Records 2, Generation counter 2 00:49:33.660 =====Discovery Log Entry 0====== 00:49:33.660 trtype: tcp 00:49:33.660 adrfam: ipv4 00:49:33.660 subtype: current discovery subsystem 00:49:33.660 treq: not specified, sq flow control disable supported 00:49:33.660 portid: 1 00:49:33.660 trsvcid: 4420 00:49:33.660 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:49:33.660 traddr: 10.0.0.1 00:49:33.660 eflags: none 00:49:33.660 sectype: none 00:49:33.660 =====Discovery Log Entry 1====== 00:49:33.660 trtype: tcp 00:49:33.660 adrfam: ipv4 00:49:33.660 subtype: nvme subsystem 00:49:33.660 treq: not specified, sq flow control disable supported 00:49:33.660 portid: 1 00:49:33.660 trsvcid: 4420 00:49:33.660 subnqn: nqn.2016-06.io.spdk:testnqn 00:49:33.660 traddr: 10.0.0.1 00:49:33.660 eflags: none 00:49:33.660 sectype: none 00:49:33.660 22:48:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:49:33.660 22:48:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:49:33.660 22:48:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:49:33.660 22:48:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:49:33.660 22:48:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:49:33.660 22:48:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:49:33.660 22:48:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:49:33.660 22:48:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:49:33.660 22:48:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:49:33.660 22:48:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:49:33.660 22:48:28 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:49:33.660 22:48:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:49:33.660 22:48:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:49:33.660 22:48:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:49:33.660 22:48:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:49:33.660 22:48:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:49:33.660 22:48:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:49:33.660 22:48:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:49:33.660 22:48:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:49:33.660 22:48:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:49:33.660 22:48:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:49:36.961 Initializing NVMe Controllers 00:49:36.961 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:49:36.961 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:49:36.961 Initialization complete. Launching workers. 00:49:36.961 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 68000, failed: 0 00:49:36.961 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 68000, failed to submit 0 00:49:36.961 success 0, unsuccessful 68000, failed 0 00:49:36.961 22:48:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:49:36.961 22:48:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:49:40.258 Initializing NVMe Controllers 00:49:40.258 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:49:40.258 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:49:40.258 Initialization complete. Launching workers. 
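The configfs writes traced above (nvmf/common.sh@684-703) are what stood up the kernel nvmet target, but xtrace hides the redirection targets of the bare echo commands. A minimal sketch of the same sequence follows; the attribute file names are the standard kernel nvmet configfs paths and are inferred, not shown in this log:

modprobe nvmet nvmet_tcp    # nvmet_tcp assumed loaded alongside nvmet
cd /sys/kernel/config/nvmet
mkdir -p subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 ports/1
# subsystem identity and host access (the "echo SPDK-nqn..." and first "echo 1" above)
echo SPDK-nqn.2016-06.io.spdk:testnqn > subsystems/nqn.2016-06.io.spdk:testnqn/attr_model
echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host
# back namespace 1 with the local NVMe disk and enable it ("echo /dev/nvme0n1", "echo 1")
echo /dev/nvme0n1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
# TCP listener on 10.0.0.1:4420 ("echo 10.0.0.1 / tcp / 4420 / ipv4")
echo 10.0.0.1 > ports/1/addr_traddr
echo tcp > ports/1/addr_trtype
echo 4420 > ports/1/addr_trsvcid
echo ipv4 > ports/1/addr_adrfam
# export the subsystem through the port; "nvme discover" then reports two records
ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ports/1/subsystems/

With that in place, rabort sweeps the qds=(4 24 64) queue depths against the same transport string, as the runs above and below show.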
00:49:40.258 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 109045, failed: 0 00:49:40.258 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27494, failed to submit 81551 00:49:40.258 success 0, unsuccessful 27494, failed 0 00:49:40.258 22:48:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:49:40.258 22:48:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:49:43.555 Initializing NVMe Controllers 00:49:43.555 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:49:43.555 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:49:43.555 Initialization complete. Launching workers. 00:49:43.555 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 103012, failed: 0 00:49:43.555 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25754, failed to submit 77258 00:49:43.555 success 0, unsuccessful 25754, failed 0 00:49:43.555 22:48:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:49:43.555 22:48:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:49:43.555 22:48:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # echo 0 00:49:43.555 22:48:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:49:43.555 22:48:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:49:43.555 22:48:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:49:43.555 22:48:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:49:43.555 22:48:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:49:43.555 22:48:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:49:43.555 22:48:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:49:46.862 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:49:46.862 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:49:46.862 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:49:46.862 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:49:46.862 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:49:46.862 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:49:46.862 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:49:46.862 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:49:46.862 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:49:46.862 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:49:46.862 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:49:46.862 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:49:46.862 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:49:46.862 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:49:46.862 0000:00:01.0 (8086 0b00): ioatdma 
-> vfio-pci 00:49:46.862 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:49:48.245 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:49:48.504 00:49:48.504 real 0m20.221s 00:49:48.504 user 0m9.984s 00:49:48.504 sys 0m6.052s 00:49:48.505 22:48:43 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:49:48.505 22:48:43 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:49:48.505 ************************************ 00:49:48.505 END TEST kernel_target_abort 00:49:48.505 ************************************ 00:49:48.765 22:48:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:49:48.765 22:48:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:49:48.765 22:48:43 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # nvmfcleanup 00:49:48.765 22:48:43 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:49:48.765 22:48:43 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:49:48.765 22:48:43 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:49:48.765 22:48:43 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:49:48.765 22:48:43 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:49:48.765 rmmod nvme_tcp 00:49:48.765 rmmod nvme_fabrics 00:49:48.765 rmmod nvme_keyring 00:49:48.765 22:48:43 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:49:48.765 22:48:43 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:49:48.765 22:48:43 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:49:48.765 22:48:43 nvmf_abort_qd_sizes -- nvmf/common.sh@515 -- # '[' -n 531270 ']' 00:49:48.765 22:48:43 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # killprocess 531270 00:49:48.765 22:48:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 531270 ']' 00:49:48.765 22:48:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 531270 00:49:48.765 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (531270) - No such process 00:49:48.765 22:48:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 531270 is not found' 00:49:48.765 Process with pid 531270 is not found 00:49:48.765 22:48:43 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:49:48.765 22:48:43 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:49:52.115 Waiting for block devices as requested 00:49:52.115 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:49:52.115 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:49:52.115 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:49:52.115 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:49:52.433 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:49:52.433 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:49:52.433 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:49:52.433 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:49:52.693 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:49:52.693 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:49:52.952 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:49:52.952 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:49:52.952 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:49:52.952 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:49:53.214 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:49:53.214 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:49:53.214 0000:00:01.1 
(8086 0b00): vfio-pci -> ioatdma 00:49:53.473 22:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:49:53.473 22:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:49:53.473 22:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:49:53.473 22:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-save 00:49:53.473 22:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:49:53.473 22:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-restore 00:49:53.473 22:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:49:53.473 22:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:49:53.473 22:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:53.473 22:48:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:49:53.473 22:48:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:56.018 22:48:50 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:49:56.018 00:49:56.018 real 0m52.267s 00:49:56.018 user 1m5.405s 00:49:56.018 sys 0m18.794s 00:49:56.018 22:48:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:49:56.018 22:48:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:49:56.018 ************************************ 00:49:56.018 END TEST nvmf_abort_qd_sizes 00:49:56.018 ************************************ 00:49:56.018 22:48:50 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:49:56.018 22:48:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:49:56.018 22:48:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:49:56.018 22:48:50 -- common/autotest_common.sh@10 -- # set +x 00:49:56.018 ************************************ 00:49:56.018 START TEST keyring_file 00:49:56.018 ************************************ 00:49:56.018 22:48:50 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:49:56.018 * Looking for test storage... 
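Before keyring_file proceeds, note that the teardown traced at the end of the abort suite (clean_kernel_target, nvmf/common.sh@710-721, plus the iptr helper) is the setup in reverse. A hedged sketch, again assuming the standard nvmet configfs paths for the redirection targets xtrace does not print:

# disable the namespace first (the bare "echo 0" in the trace; target path assumed)
echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
# unlink the port->subsystem export, then remove directories innermost first
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
rmdir /sys/kernel/config/nvmet/ports/1
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
modprobe -r nvmet_tcp nvmet
# iptr: reload the firewall rules minus anything the SPDK tests tagged
iptables-save | grep -v SPDK_NVMF | iptables-restore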
00:49:56.018 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:49:56.018 22:48:50 keyring_file -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:49:56.018 22:48:50 keyring_file -- common/autotest_common.sh@1681 -- # lcov --version 00:49:56.018 22:48:50 keyring_file -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:49:56.018 22:48:51 keyring_file -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:49:56.018 22:48:51 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:49:56.018 22:48:51 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:49:56.018 22:48:51 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:49:56.018 22:48:51 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:49:56.018 22:48:51 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:49:56.018 22:48:51 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:49:56.018 22:48:51 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:49:56.018 22:48:51 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:49:56.018 22:48:51 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:49:56.018 22:48:51 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:49:56.018 22:48:51 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:49:56.018 22:48:51 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:49:56.018 22:48:51 keyring_file -- scripts/common.sh@345 -- # : 1 00:49:56.018 22:48:51 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:49:56.018 22:48:51 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:49:56.018 22:48:51 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:49:56.018 22:48:51 keyring_file -- scripts/common.sh@353 -- # local d=1 00:49:56.018 22:48:51 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:49:56.018 22:48:51 keyring_file -- scripts/common.sh@355 -- # echo 1 00:49:56.018 22:48:51 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:49:56.018 22:48:51 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:49:56.018 22:48:51 keyring_file -- scripts/common.sh@353 -- # local d=2 00:49:56.018 22:48:51 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:49:56.018 22:48:51 keyring_file -- scripts/common.sh@355 -- # echo 2 00:49:56.018 22:48:51 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:49:56.018 22:48:51 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:49:56.018 22:48:51 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:49:56.018 22:48:51 keyring_file -- scripts/common.sh@368 -- # return 0 00:49:56.019 22:48:51 keyring_file -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:49:56.019 22:48:51 keyring_file -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:49:56.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:56.019 --rc genhtml_branch_coverage=1 00:49:56.019 --rc genhtml_function_coverage=1 00:49:56.019 --rc genhtml_legend=1 00:49:56.019 --rc geninfo_all_blocks=1 00:49:56.019 --rc geninfo_unexecuted_blocks=1 00:49:56.019 00:49:56.019 ' 00:49:56.019 22:48:51 keyring_file -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:49:56.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:56.019 --rc genhtml_branch_coverage=1 00:49:56.019 --rc genhtml_function_coverage=1 00:49:56.019 --rc genhtml_legend=1 00:49:56.019 --rc geninfo_all_blocks=1 
00:49:56.019 --rc geninfo_unexecuted_blocks=1 00:49:56.019 00:49:56.019 ' 00:49:56.019 22:48:51 keyring_file -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:49:56.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:56.019 --rc genhtml_branch_coverage=1 00:49:56.019 --rc genhtml_function_coverage=1 00:49:56.019 --rc genhtml_legend=1 00:49:56.019 --rc geninfo_all_blocks=1 00:49:56.019 --rc geninfo_unexecuted_blocks=1 00:49:56.019 00:49:56.019 ' 00:49:56.019 22:48:51 keyring_file -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:49:56.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:56.019 --rc genhtml_branch_coverage=1 00:49:56.019 --rc genhtml_function_coverage=1 00:49:56.019 --rc genhtml_legend=1 00:49:56.019 --rc geninfo_all_blocks=1 00:49:56.019 --rc geninfo_unexecuted_blocks=1 00:49:56.019 00:49:56.019 ' 00:49:56.019 22:48:51 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:49:56.019 22:48:51 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:49:56.019 22:48:51 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:49:56.019 22:48:51 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:49:56.019 22:48:51 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:49:56.019 22:48:51 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:49:56.019 22:48:51 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:49:56.019 22:48:51 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:49:56.019 22:48:51 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:49:56.019 22:48:51 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:49:56.019 22:48:51 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:49:56.019 22:48:51 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:49:56.019 22:48:51 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:49:56.019 22:48:51 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:49:56.019 22:48:51 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:49:56.019 22:48:51 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:49:56.019 22:48:51 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:49:56.019 22:48:51 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:49:56.019 22:48:51 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:49:56.019 22:48:51 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:49:56.019 22:48:51 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:49:56.019 22:48:51 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:49:56.019 22:48:51 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:49:56.019 22:48:51 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:49:56.019 22:48:51 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:56.019 22:48:51 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:56.019 22:48:51 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:56.019 22:48:51 keyring_file -- paths/export.sh@5 -- # export PATH 00:49:56.019 22:48:51 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:56.019 22:48:51 keyring_file -- nvmf/common.sh@51 -- # : 0 00:49:56.019 22:48:51 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:49:56.019 22:48:51 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:49:56.019 22:48:51 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:49:56.019 22:48:51 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:49:56.019 22:48:51 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:49:56.019 22:48:51 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:49:56.019 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:49:56.019 22:48:51 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:49:56.019 22:48:51 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:49:56.019 22:48:51 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:49:56.019 22:48:51 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:49:56.019 22:48:51 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:49:56.019 22:48:51 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:49:56.019 22:48:51 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:49:56.019 22:48:51 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:49:56.019 22:48:51 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:49:56.019 22:48:51 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:49:56.019 22:48:51 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
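prep_key (keyring/common.sh@15-23, whose trace begins here and continues below) turns each hex string into a TLS PSK file in the NVMe/TCP interchange format and locks its permissions to 0600. A minimal sketch of the pattern; the payload layout, base64 of the key bytes with a trailing little-endian CRC32, is our reading of the format_interchange_psk/format_key helpers, not something this log prints:

prep_key() {
    local name=$1 key=$2 digest=$3 path
    path=$(mktemp)
    # "NVMeTLSkey-1:<digest>:<base64(key || crc32)>:" -- assumed layout
    python3 -c '
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, "little")
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02}:{base64.b64encode(key + crc).decode()}:")
' "$key" "$digest" > "$path"
    chmod 0600 "$path"   # keyring.c later rejects anything group/other-accessible
    echo "$path"
}
key0path=$(prep_key key0 00112233445566778899aabbccddeeff 0)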
00:49:56.019 22:48:51 keyring_file -- keyring/common.sh@17 -- # name=key0 00:49:56.019 22:48:51 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:49:56.019 22:48:51 keyring_file -- keyring/common.sh@17 -- # digest=0 00:49:56.019 22:48:51 keyring_file -- keyring/common.sh@18 -- # mktemp 00:49:56.019 22:48:51 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.l0fxidWMpT 00:49:56.019 22:48:51 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:49:56.019 22:48:51 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:49:56.019 22:48:51 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:49:56.019 22:48:51 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:49:56.019 22:48:51 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:49:56.019 22:48:51 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:49:56.019 22:48:51 keyring_file -- nvmf/common.sh@731 -- # python - 00:49:56.019 22:48:51 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.l0fxidWMpT 00:49:56.019 22:48:51 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.l0fxidWMpT 00:49:56.019 22:48:51 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.l0fxidWMpT 00:49:56.019 22:48:51 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:49:56.019 22:48:51 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:49:56.019 22:48:51 keyring_file -- keyring/common.sh@17 -- # name=key1 00:49:56.019 22:48:51 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:49:56.019 22:48:51 keyring_file -- keyring/common.sh@17 -- # digest=0 00:49:56.019 22:48:51 keyring_file -- keyring/common.sh@18 -- # mktemp 00:49:56.019 22:48:51 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.dHwOb92rrS 00:49:56.019 22:48:51 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:49:56.019 22:48:51 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:49:56.019 22:48:51 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:49:56.019 22:48:51 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:49:56.019 22:48:51 keyring_file -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:49:56.019 22:48:51 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:49:56.019 22:48:51 keyring_file -- nvmf/common.sh@731 -- # python - 00:49:56.019 22:48:51 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.dHwOb92rrS 00:49:56.019 22:48:51 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.dHwOb92rrS 00:49:56.019 22:48:51 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.dHwOb92rrS 00:49:56.019 22:48:51 keyring_file -- keyring/file.sh@30 -- # tgtpid=541820 00:49:56.019 22:48:51 keyring_file -- keyring/file.sh@32 -- # waitforlisten 541820 00:49:56.019 22:48:51 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:49:56.019 22:48:51 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 541820 ']' 00:49:56.019 22:48:51 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:56.019 22:48:51 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:49:56.019 22:48:51 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:49:56.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:49:56.019 22:48:51 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:49:56.019 22:48:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:49:56.019 [2024-10-01 22:48:51.250567] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:49:56.020 [2024-10-01 22:48:51.250635] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid541820 ] 00:49:56.279 [2024-10-01 22:48:51.310427] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:56.279 [2024-10-01 22:48:51.375172] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:49:56.850 22:48:52 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:49:56.850 22:48:52 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:49:56.850 22:48:52 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:49:56.850 22:48:52 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:49:56.850 22:48:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:49:56.850 [2024-10-01 22:48:52.032616] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:49:56.850 null0 00:49:56.850 [2024-10-01 22:48:52.064670] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:49:56.850 [2024-10-01 22:48:52.064946] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:49:56.850 22:48:52 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:49:56.850 22:48:52 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:49:56.850 22:48:52 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:49:56.850 22:48:52 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:49:56.850 22:48:52 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:49:56.850 22:48:52 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:49:56.850 22:48:52 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:49:56.850 22:48:52 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:49:56.850 22:48:52 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:49:56.850 22:48:52 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:49:56.850 22:48:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:49:56.850 [2024-10-01 22:48:52.096743] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:49:56.850 request: 00:49:56.850 { 00:49:56.850 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:49:56.850 "secure_channel": false, 00:49:56.850 "listen_address": { 00:49:56.850 "trtype": "tcp", 00:49:56.850 "traddr": "127.0.0.1", 00:49:56.850 "trsvcid": "4420" 00:49:56.850 }, 00:49:56.850 "method": "nvmf_subsystem_add_listener", 00:49:56.850 "req_id": 1 00:49:56.850 } 00:49:56.850 Got JSON-RPC error response 00:49:57.110 response: 00:49:57.110 { 00:49:57.110 "code": 
-32602, 00:49:57.110 "message": "Invalid parameters" 00:49:57.110 } 00:49:57.110 22:48:52 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:49:57.110 22:48:52 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:49:57.110 22:48:52 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:49:57.110 22:48:52 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:49:57.110 22:48:52 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:49:57.110 22:48:52 keyring_file -- keyring/file.sh@47 -- # bperfpid=541982 00:49:57.110 22:48:52 keyring_file -- keyring/file.sh@49 -- # waitforlisten 541982 /var/tmp/bperf.sock 00:49:57.110 22:48:52 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:49:57.110 22:48:52 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 541982 ']' 00:49:57.110 22:48:52 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:49:57.110 22:48:52 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:49:57.110 22:48:52 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:49:57.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:49:57.110 22:48:52 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:49:57.110 22:48:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:49:57.110 [2024-10-01 22:48:52.155346] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:49:57.110 [2024-10-01 22:48:52.155395] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid541982 ] 00:49:57.110 [2024-10-01 22:48:52.233411] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:57.110 [2024-10-01 22:48:52.298867] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:49:58.051 22:48:52 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:49:58.051 22:48:52 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:49:58.051 22:48:52 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.l0fxidWMpT 00:49:58.051 22:48:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.l0fxidWMpT 00:49:58.051 22:48:53 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.dHwOb92rrS 00:49:58.051 22:48:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.dHwOb92rrS 00:49:58.051 22:48:53 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:49:58.051 22:48:53 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:49:58.051 22:48:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:49:58.051 22:48:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:49:58.051 22:48:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:49:58.312 
22:48:53 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.l0fxidWMpT == \/\t\m\p\/\t\m\p\.\l\0\f\x\i\d\W\M\p\T ]] 00:49:58.312 22:48:53 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:49:58.312 22:48:53 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:49:58.312 22:48:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:49:58.312 22:48:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:49:58.312 22:48:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:49:58.573 22:48:53 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.dHwOb92rrS == \/\t\m\p\/\t\m\p\.\d\H\w\O\b\9\2\r\r\S ]] 00:49:58.573 22:48:53 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:49:58.573 22:48:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:49:58.573 22:48:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:49:58.573 22:48:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:49:58.573 22:48:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:49:58.573 22:48:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:49:58.573 22:48:53 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:49:58.573 22:48:53 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:49:58.573 22:48:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:49:58.573 22:48:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:49:58.573 22:48:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:49:58.573 22:48:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:49:58.573 22:48:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:49:58.835 22:48:53 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:49:58.835 22:48:53 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:49:58.835 22:48:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:49:59.096 [2024-10-01 22:48:54.138856] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:49:59.096 nvme0n1 00:49:59.096 22:48:54 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:49:59.096 22:48:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:49:59.096 22:48:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:49:59.096 22:48:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:49:59.096 22:48:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:49:59.096 22:48:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:49:59.357 22:48:54 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:49:59.357 22:48:54 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:49:59.357 22:48:54 keyring_file -- 
keyring/common.sh@12 -- # get_key key1 00:49:59.357 22:48:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:49:59.357 22:48:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:49:59.357 22:48:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:49:59.357 22:48:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:49:59.357 22:48:54 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:49:59.357 22:48:54 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:49:59.618 Running I/O for 1 seconds...
00:50:00.562 15823.00 IOPS, 61.81 MiB/s
00:50:00.562 Latency(us)
00:50:00.562 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:50:00.562 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
00:50:00.562 nvme0n1 : 1.01 15857.82 61.94 0.00 0.00 8046.12 4642.13 16056.32
00:50:00.562 ===================================================================================================================
00:50:00.562 Total : 15857.82 61.94 0.00 0.00 8046.12 4642.13 16056.32
00:50:00.562 {
00:50:00.562 "results": [
00:50:00.562 {
00:50:00.562 "job": "nvme0n1",
00:50:00.562 "core_mask": "0x2",
00:50:00.562 "workload": "randrw",
00:50:00.562 "percentage": 50,
00:50:00.562 "status": "finished",
00:50:00.562 "queue_depth": 128,
00:50:00.562 "io_size": 4096,
00:50:00.562 "runtime": 1.006002,
00:50:00.562 "iops": 15857.821356219967,
00:50:00.562 "mibps": 61.94461467273425,
00:50:00.562 "io_failed": 0,
00:50:00.562 "io_timeout": 0,
00:50:00.562 "avg_latency_us": 8046.118245680018,
00:50:00.562 "min_latency_us": 4642.133333333333,
00:50:00.562 "max_latency_us": 16056.32
00:50:00.562 }
00:50:00.562 ],
00:50:00.562 "core_count": 1
00:50:00.562 }
00:50:00.562 22:48:55 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:50:00.562 22:48:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:50:00.823 22:48:55 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:50:00.823 22:48:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:50:00.823 22:48:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:50:00.823 22:48:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:50:00.823 22:48:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:50:00.823 22:48:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:50:00.823 22:48:56 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:50:00.823 22:48:56 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:50:00.823 22:48:56 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:50:00.823 22:48:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:50:00.823 22:48:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:50:00.823 22:48:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:50:00.823 22:48:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:50:01.083 22:48:56
keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:50:01.083 22:48:56 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:50:01.083 22:48:56 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:50:01.083 22:48:56 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:50:01.083 22:48:56 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:50:01.083 22:48:56 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:50:01.083 22:48:56 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:50:01.083 22:48:56 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:50:01.083 22:48:56 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:50:01.083 22:48:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:50:01.345 [2024-10-01 22:48:56.387770] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:50:01.345 [2024-10-01 22:48:56.388587] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc56e80 (107): Transport endpoint is not connected 00:50:01.345 [2024-10-01 22:48:56.389582] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc56e80 (9): Bad file descriptor 00:50:01.345 [2024-10-01 22:48:56.390584] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:50:01.345 [2024-10-01 22:48:56.390591] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:50:01.345 [2024-10-01 22:48:56.390597] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:50:01.345 [2024-10-01 22:48:56.390604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
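The error spew above is the expected outcome: the NOT wrapper (common/autotest_common.sh@650-677 in this trace) runs a command and succeeds only if the command fails, so attaching with the mismatched PSK (key1) must error out, as the JSON-RPC response below confirms. A simplified sketch of the pattern, leaving out the valid_exec_arg argument check the real helper performs:

NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return "$es"  # death by signal is a real failure: propagate it
    (( es != 0 ))                   # success here means the wrapped command failed
}
# usage, as traced above:
#   NOT bperf_cmd bdev_nvme_attach_controller ... --psk key1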
00:50:01.345 request: 00:50:01.345 { 00:50:01.345 "name": "nvme0", 00:50:01.345 "trtype": "tcp", 00:50:01.345 "traddr": "127.0.0.1", 00:50:01.345 "adrfam": "ipv4", 00:50:01.345 "trsvcid": "4420", 00:50:01.345 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:50:01.345 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:50:01.345 "prchk_reftag": false, 00:50:01.345 "prchk_guard": false, 00:50:01.345 "hdgst": false, 00:50:01.345 "ddgst": false, 00:50:01.345 "psk": "key1", 00:50:01.345 "allow_unrecognized_csi": false, 00:50:01.345 "method": "bdev_nvme_attach_controller", 00:50:01.345 "req_id": 1 00:50:01.345 } 00:50:01.345 Got JSON-RPC error response 00:50:01.345 response: 00:50:01.345 { 00:50:01.345 "code": -5, 00:50:01.345 "message": "Input/output error" 00:50:01.345 } 00:50:01.345 22:48:56 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:50:01.345 22:48:56 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:50:01.345 22:48:56 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:50:01.345 22:48:56 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:50:01.345 22:48:56 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:50:01.345 22:48:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:50:01.345 22:48:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:50:01.345 22:48:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:50:01.345 22:48:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:50:01.345 22:48:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:50:01.345 22:48:56 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:50:01.345 22:48:56 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:50:01.345 22:48:56 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:50:01.345 22:48:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:50:01.345 22:48:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:50:01.345 22:48:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:50:01.345 22:48:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:50:01.606 22:48:56 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:50:01.606 22:48:56 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:50:01.606 22:48:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:50:01.867 22:48:56 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:50:01.867 22:48:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:50:01.867 22:48:57 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:50:01.867 22:48:57 keyring_file -- keyring/file.sh@78 -- # jq length 00:50:01.867 22:48:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:50:02.128 22:48:57 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:50:02.128 22:48:57 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.l0fxidWMpT 00:50:02.128 22:48:57 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.l0fxidWMpT 00:50:02.128 22:48:57 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:50:02.128 22:48:57 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.l0fxidWMpT 00:50:02.128 22:48:57 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:50:02.128 22:48:57 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:50:02.128 22:48:57 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:50:02.128 22:48:57 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:50:02.128 22:48:57 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.l0fxidWMpT 00:50:02.128 22:48:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.l0fxidWMpT 00:50:02.389 [2024-10-01 22:48:57.405245] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.l0fxidWMpT': 0100660 00:50:02.389 [2024-10-01 22:48:57.405264] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:50:02.389 request: 00:50:02.389 { 00:50:02.389 "name": "key0", 00:50:02.389 "path": "/tmp/tmp.l0fxidWMpT", 00:50:02.389 "method": "keyring_file_add_key", 00:50:02.389 "req_id": 1 00:50:02.389 } 00:50:02.389 Got JSON-RPC error response 00:50:02.389 response: 00:50:02.389 { 00:50:02.389 "code": -1, 00:50:02.389 "message": "Operation not permitted" 00:50:02.389 } 00:50:02.389 22:48:57 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:50:02.389 22:48:57 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:50:02.389 22:48:57 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:50:02.389 22:48:57 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:50:02.389 22:48:57 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.l0fxidWMpT 00:50:02.389 22:48:57 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.l0fxidWMpT 00:50:02.389 22:48:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.l0fxidWMpT 00:50:02.389 22:48:57 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.l0fxidWMpT 00:50:02.389 22:48:57 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:50:02.389 22:48:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:50:02.389 22:48:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:50:02.389 22:48:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:50:02.389 22:48:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:50:02.389 22:48:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:50:02.651 22:48:57 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:50:02.651 22:48:57 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:50:02.651 22:48:57 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:50:02.651 22:48:57 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:50:02.651 22:48:57 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:50:02.651 22:48:57 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:50:02.651 22:48:57 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:50:02.651 22:48:57 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:50:02.651 22:48:57 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:50:02.651 22:48:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:50:02.913 [2024-10-01 22:48:57.926567] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.l0fxidWMpT': No such file or directory 00:50:02.913 [2024-10-01 22:48:57.926585] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:50:02.913 [2024-10-01 22:48:57.926598] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:50:02.913 [2024-10-01 22:48:57.926603] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:50:02.913 [2024-10-01 22:48:57.926609] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:50:02.913 [2024-10-01 22:48:57.926614] bdev_nvme.c:6447:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:50:02.913 request: 00:50:02.913 { 00:50:02.913 "name": "nvme0", 00:50:02.913 "trtype": "tcp", 00:50:02.913 "traddr": "127.0.0.1", 00:50:02.913 "adrfam": "ipv4", 00:50:02.913 "trsvcid": "4420", 00:50:02.913 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:50:02.913 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:50:02.913 "prchk_reftag": false, 00:50:02.913 "prchk_guard": false, 00:50:02.913 "hdgst": false, 00:50:02.913 "ddgst": false, 00:50:02.913 "psk": "key0", 00:50:02.913 "allow_unrecognized_csi": false, 00:50:02.913 "method": "bdev_nvme_attach_controller", 00:50:02.913 "req_id": 1 00:50:02.913 } 00:50:02.913 Got JSON-RPC error response 00:50:02.913 response: 00:50:02.913 { 00:50:02.913 "code": -19, 00:50:02.913 "message": "No such device" 00:50:02.913 } 00:50:02.913 22:48:57 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:50:02.913 22:48:57 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:50:02.913 22:48:57 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:50:02.913 22:48:57 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:50:02.913 22:48:57 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:50:02.913 22:48:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:50:02.913 22:48:58 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:50:02.913 22:48:58 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:50:02.913 22:48:58 keyring_file -- keyring/common.sh@17 -- # name=key0 00:50:02.913 22:48:58 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:50:02.913 22:48:58 keyring_file -- keyring/common.sh@17 -- # digest=0 00:50:02.913 22:48:58 keyring_file -- keyring/common.sh@18 -- # mktemp 00:50:02.913 22:48:58 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Gc4yRZtwMD 00:50:02.913 22:48:58 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:50:02.913 22:48:58 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:50:02.913 22:48:58 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:50:02.913 22:48:58 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:50:02.913 22:48:58 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:50:02.913 22:48:58 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:50:02.913 22:48:58 keyring_file -- nvmf/common.sh@731 -- # python - 00:50:02.913 22:48:58 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Gc4yRZtwMD 00:50:02.913 22:48:58 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Gc4yRZtwMD 00:50:02.913 22:48:58 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.Gc4yRZtwMD 00:50:02.913 22:48:58 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Gc4yRZtwMD 00:50:02.913 22:48:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Gc4yRZtwMD 00:50:03.174 22:48:58 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:50:03.174 22:48:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:50:03.435 nvme0n1 00:50:03.435 22:48:58 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:50:03.435 22:48:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:50:03.435 22:48:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:50:03.435 22:48:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:50:03.435 22:48:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:50:03.435 22:48:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:50:03.695 22:48:58 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:50:03.695 22:48:58 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:50:03.695 22:48:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:50:03.695 22:48:58 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:50:03.695 22:48:58 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:50:03.695 22:48:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:50:03.695 22:48:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:50:03.695 22:48:58 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:50:03.955 22:48:59 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:50:03.955 22:48:59 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:50:03.955 22:48:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:50:03.955 22:48:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:50:03.955 22:48:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:50:03.955 22:48:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:50:03.955 22:48:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:50:04.216 22:48:59 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:50:04.216 22:48:59 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:50:04.216 22:48:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:50:04.216 22:48:59 keyring_file -- keyring/file.sh@105 -- # jq length 00:50:04.216 22:48:59 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:50:04.216 22:48:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:50:04.479 22:48:59 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:50:04.479 22:48:59 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Gc4yRZtwMD 00:50:04.479 22:48:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Gc4yRZtwMD 00:50:04.740 22:48:59 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.dHwOb92rrS 00:50:04.740 22:48:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.dHwOb92rrS 00:50:04.740 22:48:59 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:50:04.740 22:48:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:50:05.002 nvme0n1 00:50:05.002 22:49:00 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:50:05.002 22:49:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:50:05.264 22:49:00 keyring_file -- keyring/file.sh@113 -- # config='{ 00:50:05.264 "subsystems": [ 00:50:05.264 { 00:50:05.264 "subsystem": "keyring", 00:50:05.264 "config": [ 00:50:05.264 { 00:50:05.264 "method": "keyring_file_add_key", 00:50:05.264 "params": { 00:50:05.264 "name": "key0", 00:50:05.264 "path": "/tmp/tmp.Gc4yRZtwMD" 00:50:05.264 } 00:50:05.264 }, 00:50:05.264 { 00:50:05.264 "method": "keyring_file_add_key", 00:50:05.264 "params": { 00:50:05.264 "name": "key1", 00:50:05.264 "path": "/tmp/tmp.dHwOb92rrS" 00:50:05.264 } 00:50:05.264 } 00:50:05.264 ] 00:50:05.264 
}, 00:50:05.264 { 00:50:05.264 "subsystem": "iobuf", 00:50:05.264 "config": [ 00:50:05.264 { 00:50:05.264 "method": "iobuf_set_options", 00:50:05.264 "params": { 00:50:05.264 "small_pool_count": 8192, 00:50:05.264 "large_pool_count": 1024, 00:50:05.264 "small_bufsize": 8192, 00:50:05.264 "large_bufsize": 135168 00:50:05.264 } 00:50:05.264 } 00:50:05.264 ] 00:50:05.264 }, 00:50:05.264 { 00:50:05.264 "subsystem": "sock", 00:50:05.264 "config": [ 00:50:05.264 { 00:50:05.264 "method": "sock_set_default_impl", 00:50:05.264 "params": { 00:50:05.264 "impl_name": "posix" 00:50:05.264 } 00:50:05.264 }, 00:50:05.264 { 00:50:05.264 "method": "sock_impl_set_options", 00:50:05.264 "params": { 00:50:05.264 "impl_name": "ssl", 00:50:05.264 "recv_buf_size": 4096, 00:50:05.264 "send_buf_size": 4096, 00:50:05.264 "enable_recv_pipe": true, 00:50:05.264 "enable_quickack": false, 00:50:05.264 "enable_placement_id": 0, 00:50:05.264 "enable_zerocopy_send_server": true, 00:50:05.264 "enable_zerocopy_send_client": false, 00:50:05.264 "zerocopy_threshold": 0, 00:50:05.264 "tls_version": 0, 00:50:05.264 "enable_ktls": false 00:50:05.264 } 00:50:05.264 }, 00:50:05.264 { 00:50:05.264 "method": "sock_impl_set_options", 00:50:05.264 "params": { 00:50:05.264 "impl_name": "posix", 00:50:05.264 "recv_buf_size": 2097152, 00:50:05.264 "send_buf_size": 2097152, 00:50:05.264 "enable_recv_pipe": true, 00:50:05.264 "enable_quickack": false, 00:50:05.264 "enable_placement_id": 0, 00:50:05.264 "enable_zerocopy_send_server": true, 00:50:05.264 "enable_zerocopy_send_client": false, 00:50:05.264 "zerocopy_threshold": 0, 00:50:05.264 "tls_version": 0, 00:50:05.264 "enable_ktls": false 00:50:05.264 } 00:50:05.264 } 00:50:05.264 ] 00:50:05.264 }, 00:50:05.264 { 00:50:05.264 "subsystem": "vmd", 00:50:05.264 "config": [] 00:50:05.264 }, 00:50:05.264 { 00:50:05.264 "subsystem": "accel", 00:50:05.264 "config": [ 00:50:05.264 { 00:50:05.264 "method": "accel_set_options", 00:50:05.264 "params": { 00:50:05.264 "small_cache_size": 128, 00:50:05.264 "large_cache_size": 16, 00:50:05.264 "task_count": 2048, 00:50:05.264 "sequence_count": 2048, 00:50:05.264 "buf_count": 2048 00:50:05.264 } 00:50:05.264 } 00:50:05.264 ] 00:50:05.264 }, 00:50:05.264 { 00:50:05.264 "subsystem": "bdev", 00:50:05.264 "config": [ 00:50:05.264 { 00:50:05.264 "method": "bdev_set_options", 00:50:05.264 "params": { 00:50:05.264 "bdev_io_pool_size": 65535, 00:50:05.264 "bdev_io_cache_size": 256, 00:50:05.264 "bdev_auto_examine": true, 00:50:05.264 "iobuf_small_cache_size": 128, 00:50:05.264 "iobuf_large_cache_size": 16, 00:50:05.264 "bdev_io_stack_size": 4096 00:50:05.264 } 00:50:05.264 }, 00:50:05.264 { 00:50:05.264 "method": "bdev_raid_set_options", 00:50:05.264 "params": { 00:50:05.264 "process_window_size_kb": 1024, 00:50:05.264 "process_max_bandwidth_mb_sec": 0 00:50:05.264 } 00:50:05.264 }, 00:50:05.264 { 00:50:05.264 "method": "bdev_iscsi_set_options", 00:50:05.264 "params": { 00:50:05.264 "timeout_sec": 30 00:50:05.264 } 00:50:05.264 }, 00:50:05.264 { 00:50:05.264 "method": "bdev_nvme_set_options", 00:50:05.264 "params": { 00:50:05.264 "action_on_timeout": "none", 00:50:05.264 "timeout_us": 0, 00:50:05.264 "timeout_admin_us": 0, 00:50:05.264 "keep_alive_timeout_ms": 10000, 00:50:05.264 "arbitration_burst": 0, 00:50:05.264 "low_priority_weight": 0, 00:50:05.264 "medium_priority_weight": 0, 00:50:05.264 "high_priority_weight": 0, 00:50:05.264 "nvme_adminq_poll_period_us": 10000, 00:50:05.265 "nvme_ioq_poll_period_us": 0, 00:50:05.265 "io_queue_requests": 512, 
00:50:05.265 "delay_cmd_submit": true, 00:50:05.265 "transport_retry_count": 4, 00:50:05.265 "bdev_retry_count": 3, 00:50:05.265 "transport_ack_timeout": 0, 00:50:05.265 "ctrlr_loss_timeout_sec": 0, 00:50:05.265 "reconnect_delay_sec": 0, 00:50:05.265 "fast_io_fail_timeout_sec": 0, 00:50:05.265 "disable_auto_failback": false, 00:50:05.265 "generate_uuids": false, 00:50:05.265 "transport_tos": 0, 00:50:05.265 "nvme_error_stat": false, 00:50:05.265 "rdma_srq_size": 0, 00:50:05.265 "io_path_stat": false, 00:50:05.265 "allow_accel_sequence": false, 00:50:05.265 "rdma_max_cq_size": 0, 00:50:05.265 "rdma_cm_event_timeout_ms": 0, 00:50:05.265 "dhchap_digests": [ 00:50:05.265 "sha256", 00:50:05.265 "sha384", 00:50:05.265 "sha512" 00:50:05.265 ], 00:50:05.265 "dhchap_dhgroups": [ 00:50:05.265 "null", 00:50:05.265 "ffdhe2048", 00:50:05.265 "ffdhe3072", 00:50:05.265 "ffdhe4096", 00:50:05.265 "ffdhe6144", 00:50:05.265 "ffdhe8192" 00:50:05.265 ] 00:50:05.265 } 00:50:05.265 }, 00:50:05.265 { 00:50:05.265 "method": "bdev_nvme_attach_controller", 00:50:05.265 "params": { 00:50:05.265 "name": "nvme0", 00:50:05.265 "trtype": "TCP", 00:50:05.265 "adrfam": "IPv4", 00:50:05.265 "traddr": "127.0.0.1", 00:50:05.265 "trsvcid": "4420", 00:50:05.265 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:50:05.265 "prchk_reftag": false, 00:50:05.265 "prchk_guard": false, 00:50:05.265 "ctrlr_loss_timeout_sec": 0, 00:50:05.265 "reconnect_delay_sec": 0, 00:50:05.265 "fast_io_fail_timeout_sec": 0, 00:50:05.265 "psk": "key0", 00:50:05.265 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:50:05.265 "hdgst": false, 00:50:05.265 "ddgst": false 00:50:05.265 } 00:50:05.265 }, 00:50:05.265 { 00:50:05.265 "method": "bdev_nvme_set_hotplug", 00:50:05.265 "params": { 00:50:05.265 "period_us": 100000, 00:50:05.265 "enable": false 00:50:05.265 } 00:50:05.265 }, 00:50:05.265 { 00:50:05.265 "method": "bdev_wait_for_examine" 00:50:05.265 } 00:50:05.265 ] 00:50:05.265 }, 00:50:05.265 { 00:50:05.265 "subsystem": "nbd", 00:50:05.265 "config": [] 00:50:05.265 } 00:50:05.265 ] 00:50:05.265 }' 00:50:05.265 22:49:00 keyring_file -- keyring/file.sh@115 -- # killprocess 541982 00:50:05.265 22:49:00 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 541982 ']' 00:50:05.265 22:49:00 keyring_file -- common/autotest_common.sh@954 -- # kill -0 541982 00:50:05.265 22:49:00 keyring_file -- common/autotest_common.sh@955 -- # uname 00:50:05.265 22:49:00 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:50:05.265 22:49:00 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 541982 00:50:05.265 22:49:00 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:50:05.265 22:49:00 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:50:05.265 22:49:00 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 541982' 00:50:05.265 killing process with pid 541982 00:50:05.265 22:49:00 keyring_file -- common/autotest_common.sh@969 -- # kill 541982 00:50:05.265 Received shutdown signal, test time was about 1.000000 seconds 00:50:05.265 00:50:05.265 Latency(us) 00:50:05.265 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:50:05.265 =================================================================================================================== 00:50:05.265 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:50:05.265 22:49:00 keyring_file -- common/autotest_common.sh@974 -- # wait 541982 00:50:05.527 22:49:00 keyring_file -- 
keyring/file.sh@118 -- # bperfpid=543797 00:50:05.527 22:49:00 keyring_file -- keyring/file.sh@120 -- # waitforlisten 543797 /var/tmp/bperf.sock 00:50:05.527 22:49:00 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 543797 ']' 00:50:05.527 22:49:00 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:50:05.527 22:49:00 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:50:05.527 22:49:00 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:50:05.527 22:49:00 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:50:05.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:50:05.527 22:49:00 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:50:05.527 22:49:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:50:05.527 22:49:00 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:50:05.527 "subsystems": [ 00:50:05.527 { 00:50:05.527 "subsystem": "keyring", 00:50:05.527 "config": [ 00:50:05.527 { 00:50:05.527 "method": "keyring_file_add_key", 00:50:05.527 "params": { 00:50:05.527 "name": "key0", 00:50:05.527 "path": "/tmp/tmp.Gc4yRZtwMD" 00:50:05.527 } 00:50:05.527 }, 00:50:05.527 { 00:50:05.527 "method": "keyring_file_add_key", 00:50:05.527 "params": { 00:50:05.527 "name": "key1", 00:50:05.527 "path": "/tmp/tmp.dHwOb92rrS" 00:50:05.527 } 00:50:05.527 } 00:50:05.527 ] 00:50:05.527 }, 00:50:05.527 { 00:50:05.527 "subsystem": "iobuf", 00:50:05.527 "config": [ 00:50:05.527 { 00:50:05.527 "method": "iobuf_set_options", 00:50:05.527 "params": { 00:50:05.527 "small_pool_count": 8192, 00:50:05.527 "large_pool_count": 1024, 00:50:05.527 "small_bufsize": 8192, 00:50:05.527 "large_bufsize": 135168 00:50:05.527 } 00:50:05.527 } 00:50:05.527 ] 00:50:05.527 }, 00:50:05.527 { 00:50:05.527 "subsystem": "sock", 00:50:05.527 "config": [ 00:50:05.527 { 00:50:05.527 "method": "sock_set_default_impl", 00:50:05.527 "params": { 00:50:05.527 "impl_name": "posix" 00:50:05.527 } 00:50:05.527 }, 00:50:05.527 { 00:50:05.527 "method": "sock_impl_set_options", 00:50:05.527 "params": { 00:50:05.527 "impl_name": "ssl", 00:50:05.527 "recv_buf_size": 4096, 00:50:05.527 "send_buf_size": 4096, 00:50:05.527 "enable_recv_pipe": true, 00:50:05.527 "enable_quickack": false, 00:50:05.527 "enable_placement_id": 0, 00:50:05.527 "enable_zerocopy_send_server": true, 00:50:05.527 "enable_zerocopy_send_client": false, 00:50:05.527 "zerocopy_threshold": 0, 00:50:05.527 "tls_version": 0, 00:50:05.527 "enable_ktls": false 00:50:05.527 } 00:50:05.527 }, 00:50:05.527 { 00:50:05.527 "method": "sock_impl_set_options", 00:50:05.527 "params": { 00:50:05.527 "impl_name": "posix", 00:50:05.527 "recv_buf_size": 2097152, 00:50:05.527 "send_buf_size": 2097152, 00:50:05.527 "enable_recv_pipe": true, 00:50:05.527 "enable_quickack": false, 00:50:05.527 "enable_placement_id": 0, 00:50:05.527 "enable_zerocopy_send_server": true, 00:50:05.527 "enable_zerocopy_send_client": false, 00:50:05.527 "zerocopy_threshold": 0, 00:50:05.527 "tls_version": 0, 00:50:05.527 "enable_ktls": false 00:50:05.527 } 00:50:05.527 } 00:50:05.527 ] 00:50:05.527 }, 00:50:05.527 { 00:50:05.527 "subsystem": "vmd", 00:50:05.527 "config": [] 00:50:05.527 }, 00:50:05.527 { 00:50:05.527 "subsystem": "accel", 
00:50:05.527 "config": [ 00:50:05.527 { 00:50:05.527 "method": "accel_set_options", 00:50:05.527 "params": { 00:50:05.527 "small_cache_size": 128, 00:50:05.527 "large_cache_size": 16, 00:50:05.527 "task_count": 2048, 00:50:05.527 "sequence_count": 2048, 00:50:05.527 "buf_count": 2048 00:50:05.527 } 00:50:05.527 } 00:50:05.527 ] 00:50:05.527 }, 00:50:05.527 { 00:50:05.527 "subsystem": "bdev", 00:50:05.527 "config": [ 00:50:05.527 { 00:50:05.527 "method": "bdev_set_options", 00:50:05.527 "params": { 00:50:05.527 "bdev_io_pool_size": 65535, 00:50:05.527 "bdev_io_cache_size": 256, 00:50:05.527 "bdev_auto_examine": true, 00:50:05.527 "iobuf_small_cache_size": 128, 00:50:05.527 "iobuf_large_cache_size": 16, 00:50:05.527 "bdev_io_stack_size": 4096 00:50:05.527 } 00:50:05.527 }, 00:50:05.527 { 00:50:05.527 "method": "bdev_raid_set_options", 00:50:05.527 "params": { 00:50:05.527 "process_window_size_kb": 1024, 00:50:05.527 "process_max_bandwidth_mb_sec": 0 00:50:05.527 } 00:50:05.527 }, 00:50:05.527 { 00:50:05.527 "method": "bdev_iscsi_set_options", 00:50:05.527 "params": { 00:50:05.527 "timeout_sec": 30 00:50:05.527 } 00:50:05.527 }, 00:50:05.527 { 00:50:05.527 "method": "bdev_nvme_set_options", 00:50:05.527 "params": { 00:50:05.527 "action_on_timeout": "none", 00:50:05.527 "timeout_us": 0, 00:50:05.527 "timeout_admin_us": 0, 00:50:05.527 "keep_alive_timeout_ms": 10000, 00:50:05.527 "arbitration_burst": 0, 00:50:05.527 "low_priority_weight": 0, 00:50:05.527 "medium_priority_weight": 0, 00:50:05.527 "high_priority_weight": 0, 00:50:05.527 "nvme_adminq_poll_period_us": 10000, 00:50:05.527 "nvme_ioq_poll_period_us": 0, 00:50:05.527 "io_queue_requests": 512, 00:50:05.527 "delay_cmd_submit": true, 00:50:05.527 "transport_retry_count": 4, 00:50:05.527 "bdev_retry_count": 3, 00:50:05.527 "transport_ack_timeout": 0, 00:50:05.527 "ctrlr_loss_timeout_sec": 0, 00:50:05.527 "reconnect_delay_sec": 0, 00:50:05.527 "fast_io_fail_timeout_sec": 0, 00:50:05.527 "disable_auto_failback": false, 00:50:05.527 "generate_uuids": false, 00:50:05.527 "transport_tos": 0, 00:50:05.527 "nvme_error_stat": false, 00:50:05.527 "rdma_srq_size": 0, 00:50:05.527 "io_path_stat": false, 00:50:05.527 "allow_accel_sequence": false, 00:50:05.527 "rdma_max_cq_size": 0, 00:50:05.527 "rdma_cm_event_timeout_ms": 0, 00:50:05.527 "dhchap_digests": [ 00:50:05.527 "sha256", 00:50:05.527 "sha384", 00:50:05.527 "sha512" 00:50:05.527 ], 00:50:05.527 "dhchap_dhgroups": [ 00:50:05.527 "null", 00:50:05.527 "ffdhe2048", 00:50:05.528 "ffdhe3072", 00:50:05.528 "ffdhe4096", 00:50:05.528 "ffdhe6144", 00:50:05.528 "ffdhe8192" 00:50:05.528 ] 00:50:05.528 } 00:50:05.528 }, 00:50:05.528 { 00:50:05.528 "method": "bdev_nvme_attach_controller", 00:50:05.528 "params": { 00:50:05.528 "name": "nvme0", 00:50:05.528 "trtype": "TCP", 00:50:05.528 "adrfam": "IPv4", 00:50:05.528 "traddr": "127.0.0.1", 00:50:05.528 "trsvcid": "4420", 00:50:05.528 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:50:05.528 "prchk_reftag": false, 00:50:05.528 "prchk_guard": false, 00:50:05.528 "ctrlr_loss_timeout_sec": 0, 00:50:05.528 "reconnect_delay_sec": 0, 00:50:05.528 "fast_io_fail_timeout_sec": 0, 00:50:05.528 "psk": "key0", 00:50:05.528 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:50:05.528 "hdgst": false, 00:50:05.528 "ddgst": false 00:50:05.528 } 00:50:05.528 }, 00:50:05.528 { 00:50:05.528 "method": "bdev_nvme_set_hotplug", 00:50:05.528 "params": { 00:50:05.528 "period_us": 100000, 00:50:05.528 "enable": false 00:50:05.528 } 00:50:05.528 }, 00:50:05.528 { 00:50:05.528 "method": 
"bdev_wait_for_examine" 00:50:05.528 } 00:50:05.528 ] 00:50:05.528 }, 00:50:05.528 { 00:50:05.528 "subsystem": "nbd", 00:50:05.528 "config": [] 00:50:05.528 } 00:50:05.528 ] 00:50:05.528 }' 00:50:05.528 [2024-10-01 22:49:00.709216] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:50:05.528 [2024-10-01 22:49:00.709272] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid543797 ] 00:50:05.790 [2024-10-01 22:49:00.785540] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:05.790 [2024-10-01 22:49:00.839397] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:50:05.790 [2024-10-01 22:49:01.032602] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:50:06.361 22:49:01 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:50:06.361 22:49:01 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:50:06.361 22:49:01 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:50:06.361 22:49:01 keyring_file -- keyring/file.sh@121 -- # jq length 00:50:06.361 22:49:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:50:06.623 22:49:01 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:50:06.623 22:49:01 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:50:06.623 22:49:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:50:06.623 22:49:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:50:06.623 22:49:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:50:06.623 22:49:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:50:06.623 22:49:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:50:06.623 22:49:01 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:50:06.623 22:49:01 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:50:06.623 22:49:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:50:06.623 22:49:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:50:06.623 22:49:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:50:06.623 22:49:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:50:06.623 22:49:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:50:06.884 22:49:02 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:50:06.884 22:49:02 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:50:06.884 22:49:02 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:50:06.884 22:49:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:50:07.145 22:49:02 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:50:07.145 22:49:02 keyring_file -- keyring/file.sh@1 -- # cleanup 00:50:07.145 22:49:02 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.Gc4yRZtwMD /tmp/tmp.dHwOb92rrS 00:50:07.145 22:49:02 keyring_file -- keyring/file.sh@20 -- # 
killprocess 543797 00:50:07.145 22:49:02 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 543797 ']' 00:50:07.145 22:49:02 keyring_file -- common/autotest_common.sh@954 -- # kill -0 543797 00:50:07.145 22:49:02 keyring_file -- common/autotest_common.sh@955 -- # uname 00:50:07.145 22:49:02 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:50:07.145 22:49:02 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 543797 00:50:07.145 22:49:02 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:50:07.145 22:49:02 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:50:07.145 22:49:02 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 543797' 00:50:07.145 killing process with pid 543797 00:50:07.145 22:49:02 keyring_file -- common/autotest_common.sh@969 -- # kill 543797 00:50:07.145 Received shutdown signal, test time was about 1.000000 seconds 00:50:07.145 00:50:07.145 Latency(us) 00:50:07.145 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:50:07.145 =================================================================================================================== 00:50:07.145 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:50:07.145 22:49:02 keyring_file -- common/autotest_common.sh@974 -- # wait 543797 00:50:07.406 22:49:02 keyring_file -- keyring/file.sh@21 -- # killprocess 541820 00:50:07.406 22:49:02 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 541820 ']' 00:50:07.406 22:49:02 keyring_file -- common/autotest_common.sh@954 -- # kill -0 541820 00:50:07.406 22:49:02 keyring_file -- common/autotest_common.sh@955 -- # uname 00:50:07.406 22:49:02 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:50:07.406 22:49:02 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 541820 00:50:07.406 22:49:02 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:50:07.406 22:49:02 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:50:07.406 22:49:02 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 541820' 00:50:07.406 killing process with pid 541820 00:50:07.406 22:49:02 keyring_file -- common/autotest_common.sh@969 -- # kill 541820 00:50:07.406 22:49:02 keyring_file -- common/autotest_common.sh@974 -- # wait 541820 00:50:07.667 00:50:07.667 real 0m11.905s 00:50:07.667 user 0m28.383s 00:50:07.667 sys 0m2.706s 00:50:07.667 22:49:02 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:50:07.667 22:49:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:50:07.667 ************************************ 00:50:07.667 END TEST keyring_file 00:50:07.667 ************************************ 00:50:07.667 22:49:02 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:50:07.667 22:49:02 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:50:07.667 22:49:02 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:50:07.667 22:49:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:50:07.667 22:49:02 -- common/autotest_common.sh@10 -- # set +x 00:50:07.667 ************************************ 00:50:07.667 START TEST keyring_linux 00:50:07.667 ************************************ 00:50:07.667 22:49:02 keyring_linux -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:50:07.667 Joined session keyring: 475071648 00:50:07.928 * Looking for test storage... 00:50:07.928 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:50:07.928 22:49:02 keyring_linux -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:50:07.928 22:49:02 keyring_linux -- common/autotest_common.sh@1681 -- # lcov --version 00:50:07.928 22:49:02 keyring_linux -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:50:07.928 22:49:03 keyring_linux -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:50:07.928 22:49:03 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:50:07.928 22:49:03 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:50:07.928 22:49:03 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:50:07.928 22:49:03 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:50:07.928 22:49:03 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:50:07.928 22:49:03 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:50:07.929 22:49:03 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:50:07.929 22:49:03 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:50:07.929 22:49:03 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:50:07.929 22:49:03 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:50:07.929 22:49:03 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:50:07.929 22:49:03 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:50:07.929 22:49:03 keyring_linux -- scripts/common.sh@345 -- # : 1 00:50:07.929 22:49:03 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:50:07.929 22:49:03 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:50:07.929 22:49:03 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:50:07.929 22:49:03 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:50:07.929 22:49:03 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:50:07.929 22:49:03 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:50:07.929 22:49:03 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:50:07.929 22:49:03 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:50:07.929 22:49:03 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:50:07.929 22:49:03 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:50:07.929 22:49:03 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:50:07.929 22:49:03 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:50:07.929 22:49:03 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:50:07.929 22:49:03 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:50:07.929 22:49:03 keyring_linux -- scripts/common.sh@368 -- # return 0 00:50:07.929 22:49:03 keyring_linux -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:50:07.929 22:49:03 keyring_linux -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:50:07.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:07.929 --rc genhtml_branch_coverage=1 00:50:07.929 --rc genhtml_function_coverage=1 00:50:07.929 --rc genhtml_legend=1 00:50:07.929 --rc geninfo_all_blocks=1 00:50:07.929 --rc geninfo_unexecuted_blocks=1 00:50:07.929 00:50:07.929 ' 00:50:07.929 22:49:03 keyring_linux -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:50:07.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:07.929 --rc genhtml_branch_coverage=1 00:50:07.929 --rc genhtml_function_coverage=1 00:50:07.929 --rc genhtml_legend=1 00:50:07.929 --rc geninfo_all_blocks=1 00:50:07.929 --rc geninfo_unexecuted_blocks=1 00:50:07.929 00:50:07.929 ' 00:50:07.929 22:49:03 keyring_linux -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:50:07.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:07.929 --rc genhtml_branch_coverage=1 00:50:07.929 --rc genhtml_function_coverage=1 00:50:07.929 --rc genhtml_legend=1 00:50:07.929 --rc geninfo_all_blocks=1 00:50:07.929 --rc geninfo_unexecuted_blocks=1 00:50:07.929 00:50:07.929 ' 00:50:07.929 22:49:03 keyring_linux -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:50:07.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:07.929 --rc genhtml_branch_coverage=1 00:50:07.929 --rc genhtml_function_coverage=1 00:50:07.929 --rc genhtml_legend=1 00:50:07.929 --rc geninfo_all_blocks=1 00:50:07.929 --rc geninfo_unexecuted_blocks=1 00:50:07.929 00:50:07.929 ' 00:50:07.929 22:49:03 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:50:07.929 22:49:03 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:50:07.929 22:49:03 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:50:07.929 22:49:03 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:50:07.929 22:49:03 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:50:07.929 22:49:03 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:50:07.929 22:49:03 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:50:07.929 22:49:03 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:50:07.929 22:49:03 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:50:07.929 22:49:03 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:50:07.929 22:49:03 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:50:07.929 22:49:03 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:50:07.929 22:49:03 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:50:07.929 22:49:03 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:50:07.929 22:49:03 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:50:07.929 22:49:03 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:50:07.929 22:49:03 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:50:07.929 22:49:03 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:50:07.929 22:49:03 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:50:07.929 22:49:03 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:50:07.929 22:49:03 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:50:07.929 22:49:03 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:50:07.929 22:49:03 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:50:07.929 22:49:03 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:50:07.929 22:49:03 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:07.929 22:49:03 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:07.929 22:49:03 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:07.929 22:49:03 keyring_linux -- paths/export.sh@5 -- # export PATH 00:50:07.930 22:49:03 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
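[editor's note] The lcov probe near the top of this keyring_linux run walks scripts/common.sh's lt/cmp_versions helpers: each version string is split on '.', '-' and ':' and the numeric fields are compared left to right. A condensed bash reconstruction of that split-and-compare logic follows; function and variable names come from the trace, fields are assumed numeric (the real helper validates them via decimal), so treat this as a sketch rather than the exact library source:

    cmp_versions() {   # cmp_versions <ver1> <op> <ver2>, e.g. cmp_versions 1.15 '<' 2
        local -a ver1 ver2
        local op=$2 v max
        IFS='.-:' read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
        IFS='.-:' read -ra ver2 <<< "$3"
        max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            local d1=${ver1[v]:-0} d2=${ver2[v]:-0}   # missing fields compare as 0
            (( d1 > d2 )) && { [[ $op == '>' || $op == '>=' ]]; return; }
            (( d1 < d2 )) && { [[ $op == '<' || $op == '<=' ]]; return; }
        done
        [[ $op == *'='* ]]   # equal versions satisfy ==, >=, <=
    }
    lt() { cmp_versions "$1" '<' "$2"; }   # the "lt 1.15 2" probe above takes this path

Since 1 < 2 on the first field, lt returns success and the LCOV_OPTS branch-coverage flags are exported, which is exactly the outcome traced above.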
00:50:07.930 22:49:03 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:50:07.930 22:49:03 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:50:07.930 22:49:03 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:50:07.930 22:49:03 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:50:07.930 22:49:03 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:50:07.930 22:49:03 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:50:07.930 22:49:03 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:50:07.930 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:50:07.930 22:49:03 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:50:07.930 22:49:03 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:50:07.930 22:49:03 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:50:07.930 22:49:03 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:50:07.930 22:49:03 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:50:07.930 22:49:03 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:50:07.930 22:49:03 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:50:07.930 22:49:03 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:50:07.930 22:49:03 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:50:07.930 22:49:03 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:50:07.930 22:49:03 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:50:07.930 22:49:03 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:50:07.930 22:49:03 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:50:07.930 22:49:03 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:50:07.930 22:49:03 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:50:07.930 22:49:03 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:50:07.930 22:49:03 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:50:07.930 22:49:03 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:50:07.930 22:49:03 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:50:07.930 22:49:03 keyring_linux -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:50:07.930 22:49:03 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:50:07.930 22:49:03 keyring_linux -- nvmf/common.sh@731 -- # python - 00:50:07.930 22:49:03 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:50:07.930 22:49:03 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:50:07.930 /tmp/:spdk-test:key0 00:50:07.930 22:49:03 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:50:07.930 22:49:03 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:50:07.930 22:49:03 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:50:07.930 22:49:03 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:50:07.930 22:49:03 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:50:07.930 22:49:03 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:50:07.930 
22:49:03 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:50:07.930 22:49:03 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:50:07.930 22:49:03 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:50:07.930 22:49:03 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:50:07.930 22:49:03 keyring_linux -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:50:07.930 22:49:03 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:50:07.930 22:49:03 keyring_linux -- nvmf/common.sh@731 -- # python - 00:50:07.930 22:49:03 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:50:07.930 22:49:03 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:50:07.930 /tmp/:spdk-test:key1 00:50:07.930 22:49:03 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:50:07.930 22:49:03 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=544236 00:50:07.930 22:49:03 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 544236 00:50:07.930 22:49:03 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 544236 ']' 00:50:07.930 22:49:03 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:50:07.930 22:49:03 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:50:07.930 22:49:03 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:50:07.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:50:07.930 22:49:03 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:50:07.930 22:49:03 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:50:08.192 [2024-10-01 22:49:03.212228] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
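[editor's note] The prep_key/format_interchange_psk/format_key calls traced above (keyring/common.sh@20, nvmf/common.sh@741) build the NVMe-oF TLS PSK interchange string: the prefix NVMeTLSkey-1, a two-digit hash indicator (00 here, i.e. no PSK hash), and base64 of the key material with a CRC32 appended. A hypothetical standalone equivalent of the "python -" step at nvmf/common.sh@731 is sketched below; the little-endian CRC byte order is inferred from the key values this log prints, so verify it against nvmf/common.sh before reusing:

    python3 -c 'import base64, struct, sys, zlib; k = sys.argv[1].encode(); print("NVMeTLSkey-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(k + struct.pack("<I", zlib.crc32(k))).decode()))' 00112233445566778899aabbccddeeff 0
    # if the CRC variant matches, this reproduces the key0 value seen later in this log:
    # NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

Note that the 32-character hex string is used as the key material verbatim (it is not hex-decoded first), which is why the base64 payload starts with MDAx..., the encoding of the ASCII characters "001...".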
00:50:08.192 [2024-10-01 22:49:03.212283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid544236 ] 00:50:08.192 [2024-10-01 22:49:03.274304] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:08.192 [2024-10-01 22:49:03.338692] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:50:08.764 22:49:04 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:50:08.764 22:49:04 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:50:08.764 22:49:04 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:50:08.764 22:49:04 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:50:08.764 22:49:04 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:50:08.764 [2024-10-01 22:49:04.005519] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:50:09.025 null0 00:50:09.025 [2024-10-01 22:49:04.037570] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:50:09.025 [2024-10-01 22:49:04.037945] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:50:09.025 22:49:04 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:50:09.025 22:49:04 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:50:09.025 486929409 00:50:09.025 22:49:04 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:50:09.025 366855898 00:50:09.025 22:49:04 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=544509 00:50:09.025 22:49:04 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:50:09.025 22:49:04 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 544509 /var/tmp/bperf.sock 00:50:09.025 22:49:04 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 544509 ']' 00:50:09.025 22:49:04 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:50:09.025 22:49:04 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:50:09.025 22:49:04 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:50:09.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:50:09.025 22:49:04 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:50:09.025 22:49:04 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:50:09.025 [2024-10-01 22:49:04.113466] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
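[editor's note] At this point spdk_tgt is listening on 127.0.0.1:4420 with TLS enabled, and both PSKs have been stored as user-type keys in the kernel session keyring (serials 486929409 and 366855898 in the trace above). The keyctl flow the test relies on, condensed; key names and PSK string are taken from this log, and the serial numbers are per-run values:

    sn=$(keyctl add user :spdk-test:key0 'NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' @s)
    keyctl search @s user :spdk-test:key0    # resolves the name back to $sn (486929409 in this run)
    keyctl print "$sn"                       # prints the interchange-format PSK
    keyctl unlink "$sn"                      # what cleanup() runs per key at exit

Once keyring_linux_set_options --enable has been sent over the bperf socket, bdevperf can reference the key purely by name (--psk :spdk-test:key0) on bdev_nvme_attach_controller, which is exactly the sequence in the trace that follows.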
00:50:09.025 [2024-10-01 22:49:04.113514] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid544509 ] 00:50:09.025 [2024-10-01 22:49:04.189080] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:09.025 [2024-10-01 22:49:04.242873] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:50:09.966 22:49:04 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:50:09.966 22:49:04 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:50:09.966 22:49:04 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:50:09.966 22:49:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:50:09.966 22:49:05 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:50:09.966 22:49:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:50:10.227 22:49:05 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:50:10.227 22:49:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:50:10.487 [2024-10-01 22:49:05.495294] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:50:10.487 nvme0n1 00:50:10.487 22:49:05 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:50:10.487 22:49:05 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:50:10.487 22:49:05 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:50:10.487 22:49:05 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:50:10.487 22:49:05 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:50:10.488 22:49:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:50:10.748 22:49:05 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:50:10.748 22:49:05 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:50:10.748 22:49:05 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:50:10.748 22:49:05 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:50:10.748 22:49:05 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:50:10.748 22:49:05 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:50:10.748 22:49:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:50:10.748 22:49:05 keyring_linux -- keyring/linux.sh@25 -- # sn=486929409 00:50:10.748 22:49:05 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:50:10.748 22:49:05 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:50:10.748 22:49:05 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 486929409 == \4\8\6\9\2\9\4\0\9 ]] 00:50:10.748 22:49:05 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 486929409 00:50:10.748 22:49:05 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:50:10.748 22:49:05 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:50:11.009 Running I/O for 1 seconds... 00:50:11.951 17417.00 IOPS, 68.04 MiB/s 00:50:11.951 Latency(us) 00:50:11.951 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:50:11.951 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:50:11.951 nvme0n1 : 1.01 17416.45 68.03 0.00 0.00 7322.70 1815.89 8519.68 00:50:11.951 =================================================================================================================== 00:50:11.951 Total : 17416.45 68.03 0.00 0.00 7322.70 1815.89 8519.68 00:50:11.951 { 00:50:11.951 "results": [ 00:50:11.951 { 00:50:11.951 "job": "nvme0n1", 00:50:11.951 "core_mask": "0x2", 00:50:11.951 "workload": "randread", 00:50:11.951 "status": "finished", 00:50:11.951 "queue_depth": 128, 00:50:11.951 "io_size": 4096, 00:50:11.951 "runtime": 1.007381, 00:50:11.951 "iops": 17416.44918853939, 00:50:11.951 "mibps": 68.033004642732, 00:50:11.951 "io_failed": 0, 00:50:11.951 "io_timeout": 0, 00:50:11.951 "avg_latency_us": 7322.696274342167, 00:50:11.951 "min_latency_us": 1815.8933333333334, 00:50:11.951 "max_latency_us": 8519.68 00:50:11.951 } 00:50:11.951 ], 00:50:11.951 "core_count": 1 00:50:11.951 } 00:50:11.951 22:49:07 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:50:11.951 22:49:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:50:12.212 22:49:07 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:50:12.212 22:49:07 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:50:12.212 22:49:07 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:50:12.212 22:49:07 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:50:12.212 22:49:07 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:50:12.212 22:49:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:50:12.212 22:49:07 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:50:12.212 22:49:07 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:50:12.212 22:49:07 keyring_linux -- keyring/linux.sh@23 -- # return 00:50:12.212 22:49:07 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:50:12.212 22:49:07 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:50:12.212 22:49:07 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:50:12.212 22:49:07 keyring_linux -- common/autotest_common.sh@638 -- # local 
arg=bperf_cmd 00:50:12.212 22:49:07 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:50:12.212 22:49:07 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:50:12.212 22:49:07 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:50:12.212 22:49:07 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:50:12.212 22:49:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:50:12.473 [2024-10-01 22:49:07.570040] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:50:12.473 [2024-10-01 22:49:07.570775] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe7ec10 (107): Transport endpoint is not connected 00:50:12.473 [2024-10-01 22:49:07.571771] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe7ec10 (9): Bad file descriptor 00:50:12.473 [2024-10-01 22:49:07.572772] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:50:12.473 [2024-10-01 22:49:07.572781] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:50:12.473 [2024-10-01 22:49:07.572787] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:50:12.473 [2024-10-01 22:49:07.572793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
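[editor's note] The attach attempt above deliberately uses :spdk-test:key1, which does not match the PSK the listener was configured with, so the TLS session is torn down (errno 107, then a bad file descriptor on the dead socket) and the RPC dump below fails with code -5, Input/output error. The NOT wrapper from autotest_common.sh converts that expected failure into a test pass; a reconstruction of the es-handling visible in the trace, with the signal and expected-pattern branches elided:

    NOT() {
        local es=0
        "$@" || es=$?
        # the real helper also special-cases es > 128 (killed by a signal) and an
        # optional expected-error pattern; both branches are skipped in this sketch
        (( !es == 0 ))   # exits 0 exactly when the wrapped command failed
    }
    # usage, as at keyring/linux.sh@84:
    # NOT bperf_cmd bdev_nvme_attach_controller ... --psk :spdk-test:key1

With es=1 from the failed attach, (( !es == 0 )) evaluates true, NOT returns 0, and the test proceeds to cleanup, unlinking both keyring entries.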
00:50:12.473 request: 00:50:12.473 { 00:50:12.473 "name": "nvme0", 00:50:12.473 "trtype": "tcp", 00:50:12.473 "traddr": "127.0.0.1", 00:50:12.473 "adrfam": "ipv4", 00:50:12.473 "trsvcid": "4420", 00:50:12.473 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:50:12.473 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:50:12.473 "prchk_reftag": false, 00:50:12.473 "prchk_guard": false, 00:50:12.473 "hdgst": false, 00:50:12.473 "ddgst": false, 00:50:12.473 "psk": ":spdk-test:key1", 00:50:12.473 "allow_unrecognized_csi": false, 00:50:12.473 "method": "bdev_nvme_attach_controller", 00:50:12.473 "req_id": 1 00:50:12.473 } 00:50:12.473 Got JSON-RPC error response 00:50:12.473 response: 00:50:12.473 { 00:50:12.473 "code": -5, 00:50:12.473 "message": "Input/output error" 00:50:12.473 } 00:50:12.473 22:49:07 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:50:12.473 22:49:07 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:50:12.473 22:49:07 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:50:12.473 22:49:07 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:50:12.473 22:49:07 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:50:12.473 22:49:07 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:50:12.473 22:49:07 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:50:12.473 22:49:07 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:50:12.473 22:49:07 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:50:12.473 22:49:07 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:50:12.473 22:49:07 keyring_linux -- keyring/linux.sh@33 -- # sn=486929409 00:50:12.473 22:49:07 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 486929409 00:50:12.473 1 links removed 00:50:12.473 22:49:07 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:50:12.473 22:49:07 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:50:12.473 22:49:07 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:50:12.473 22:49:07 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:50:12.473 22:49:07 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:50:12.473 22:49:07 keyring_linux -- keyring/linux.sh@33 -- # sn=366855898 00:50:12.473 22:49:07 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 366855898 00:50:12.473 1 links removed 00:50:12.473 22:49:07 keyring_linux -- keyring/linux.sh@41 -- # killprocess 544509 00:50:12.473 22:49:07 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 544509 ']' 00:50:12.473 22:49:07 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 544509 00:50:12.473 22:49:07 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:50:12.473 22:49:07 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:50:12.473 22:49:07 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 544509 00:50:12.473 22:49:07 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:50:12.473 22:49:07 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:50:12.473 22:49:07 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 544509' 00:50:12.473 killing process with pid 544509 00:50:12.473 22:49:07 keyring_linux -- common/autotest_common.sh@969 -- # kill 544509 00:50:12.473 Received shutdown signal, test time was about 1.000000 seconds 00:50:12.473 00:50:12.473 
Latency(us) 00:50:12.473 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:50:12.473 =================================================================================================================== 00:50:12.473 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:50:12.473 22:49:07 keyring_linux -- common/autotest_common.sh@974 -- # wait 544509 00:50:12.734 22:49:07 keyring_linux -- keyring/linux.sh@42 -- # killprocess 544236 00:50:12.734 22:49:07 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 544236 ']' 00:50:12.734 22:49:07 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 544236 00:50:12.734 22:49:07 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:50:12.734 22:49:07 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:50:12.734 22:49:07 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 544236 00:50:12.734 22:49:07 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:50:12.734 22:49:07 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:50:12.734 22:49:07 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 544236' 00:50:12.734 killing process with pid 544236 00:50:12.734 22:49:07 keyring_linux -- common/autotest_common.sh@969 -- # kill 544236 00:50:12.734 22:49:07 keyring_linux -- common/autotest_common.sh@974 -- # wait 544236 00:50:12.993 00:50:12.993 real 0m5.333s 00:50:12.993 user 0m9.832s 00:50:12.993 sys 0m1.408s 00:50:12.993 22:49:08 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:50:12.993 22:49:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:50:12.993 ************************************ 00:50:12.993 END TEST keyring_linux 00:50:12.993 ************************************ 00:50:12.993 22:49:08 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:50:12.993 22:49:08 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:50:12.993 22:49:08 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:50:12.993 22:49:08 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:50:12.993 22:49:08 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:50:12.993 22:49:08 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:50:12.993 22:49:08 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:50:12.993 22:49:08 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:50:12.993 22:49:08 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:50:12.993 22:49:08 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:50:12.993 22:49:08 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:50:12.993 22:49:08 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:50:12.993 22:49:08 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:50:12.993 22:49:08 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:50:12.993 22:49:08 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:50:12.993 22:49:08 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:50:12.993 22:49:08 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:50:12.993 22:49:08 -- common/autotest_common.sh@724 -- # xtrace_disable 00:50:12.993 22:49:08 -- common/autotest_common.sh@10 -- # set +x 00:50:12.993 22:49:08 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:50:12.993 22:49:08 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:50:12.993 22:49:08 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:50:12.993 22:49:08 -- common/autotest_common.sh@10 -- # set +x 00:50:21.125 INFO: APP EXITING 00:50:21.125 INFO: killing all VMs 00:50:21.125 INFO: killing vhost app 00:50:21.125 WARN: no vhost 
pid file found 00:50:21.125 INFO: EXIT DONE 00:50:23.668 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:50:23.668 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:50:23.668 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:50:23.668 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:50:23.668 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:50:23.668 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:50:23.668 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:50:23.668 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:50:23.668 0000:65:00.0 (144d a80a): Already using the nvme driver 00:50:23.668 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:50:23.669 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:50:23.932 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:50:23.932 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:50:23.932 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:50:23.932 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:50:23.932 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:50:23.932 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:50:28.147 Cleaning 00:50:28.147 Removing: /var/run/dpdk/spdk0/config 00:50:28.147 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:50:28.147 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:50:28.147 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:50:28.147 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:50:28.147 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:50:28.147 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:50:28.147 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:50:28.147 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:50:28.147 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:50:28.147 Removing: /var/run/dpdk/spdk0/hugepage_info 00:50:28.147 Removing: /var/run/dpdk/spdk1/config 00:50:28.147 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:50:28.147 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:50:28.147 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:50:28.147 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:50:28.147 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:50:28.147 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:50:28.147 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:50:28.147 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:50:28.147 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:50:28.147 Removing: /var/run/dpdk/spdk1/hugepage_info 00:50:28.147 Removing: /var/run/dpdk/spdk2/config 00:50:28.147 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:50:28.147 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:50:28.147 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:50:28.147 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:50:28.147 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:50:28.147 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:50:28.147 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:50:28.147 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:50:28.147 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:50:28.147 Removing: /var/run/dpdk/spdk2/hugepage_info 00:50:28.147 Removing: /var/run/dpdk/spdk3/config 00:50:28.147 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:50:28.147 
Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:50:28.147 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:50:28.147 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:50:28.147 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:50:28.147 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:50:28.147 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:50:28.147 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:50:28.147 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:50:28.147 Removing: /var/run/dpdk/spdk3/hugepage_info 00:50:28.147 Removing: /var/run/dpdk/spdk4/config 00:50:28.147 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:50:28.147 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:50:28.147 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:50:28.147 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:50:28.147 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:50:28.147 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:50:28.147 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:50:28.147 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:50:28.147 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:50:28.147 Removing: /var/run/dpdk/spdk4/hugepage_info 00:50:28.147 Removing: /dev/shm/bdev_svc_trace.1 00:50:28.147 Removing: /dev/shm/nvmf_trace.0 00:50:28.147 Removing: /dev/shm/spdk_tgt_trace.pid4160079 00:50:28.147 Removing: /var/run/dpdk/spdk0 00:50:28.147 Removing: /var/run/dpdk/spdk1 00:50:28.147 Removing: /var/run/dpdk/spdk2 00:50:28.147 Removing: /var/run/dpdk/spdk3 00:50:28.147 Removing: /var/run/dpdk/spdk4 00:50:28.147 Removing: /var/run/dpdk/spdk_pid12838 00:50:28.147 Removing: /var/run/dpdk/spdk_pid13270 00:50:28.147 Removing: /var/run/dpdk/spdk_pid138907 00:50:28.147 Removing: /var/run/dpdk/spdk_pid145408 00:50:28.147 Removing: /var/run/dpdk/spdk_pid152435 00:50:28.147 Removing: /var/run/dpdk/spdk_pid159957 00:50:28.147 Removing: /var/run/dpdk/spdk_pid159962 00:50:28.147 Removing: /var/run/dpdk/spdk_pid160965 00:50:28.147 Removing: /var/run/dpdk/spdk_pid161975 00:50:28.147 Removing: /var/run/dpdk/spdk_pid162982 00:50:28.147 Removing: /var/run/dpdk/spdk_pid163659 00:50:28.147 Removing: /var/run/dpdk/spdk_pid163661 00:50:28.147 Removing: /var/run/dpdk/spdk_pid163992 00:50:28.147 Removing: /var/run/dpdk/spdk_pid164033 00:50:28.147 Removing: /var/run/dpdk/spdk_pid164169 00:50:28.147 Removing: /var/run/dpdk/spdk_pid165220 00:50:28.147 Removing: /var/run/dpdk/spdk_pid166232 00:50:28.147 Removing: /var/run/dpdk/spdk_pid167320 00:50:28.147 Removing: /var/run/dpdk/spdk_pid167932 00:50:28.147 Removing: /var/run/dpdk/spdk_pid168017 00:50:28.147 Removing: /var/run/dpdk/spdk_pid168317 00:50:28.147 Removing: /var/run/dpdk/spdk_pid169803 00:50:28.147 Removing: /var/run/dpdk/spdk_pid171183 00:50:28.147 Removing: /var/run/dpdk/spdk_pid181802 00:50:28.147 Removing: /var/run/dpdk/spdk_pid18331 00:50:28.147 Removing: /var/run/dpdk/spdk_pid218154 00:50:28.147 Removing: /var/run/dpdk/spdk_pid224143 00:50:28.147 Removing: /var/run/dpdk/spdk_pid226033 00:50:28.147 Removing: /var/run/dpdk/spdk_pid228371 00:50:28.147 Removing: /var/run/dpdk/spdk_pid228558 00:50:28.147 Removing: /var/run/dpdk/spdk_pid228748 00:50:28.147 Removing: /var/run/dpdk/spdk_pid229074 00:50:28.147 Removing: /var/run/dpdk/spdk_pid229786 00:50:28.147 Removing: /var/run/dpdk/spdk_pid232108 00:50:28.147 Removing: /var/run/dpdk/spdk_pid233222 00:50:28.147 Removing: /var/run/dpdk/spdk_pid233826 00:50:28.147 Removing: 
/var/run/dpdk/spdk_pid236467 00:50:28.147 Removing: /var/run/dpdk/spdk_pid237339 00:50:28.147 Removing: /var/run/dpdk/spdk_pid238057 00:50:28.147 Removing: /var/run/dpdk/spdk_pid243120 00:50:28.147 Removing: /var/run/dpdk/spdk_pid249838 00:50:28.147 Removing: /var/run/dpdk/spdk_pid249839 00:50:28.147 Removing: /var/run/dpdk/spdk_pid249840 00:50:28.147 Removing: /var/run/dpdk/spdk_pid254534 00:50:28.147 Removing: /var/run/dpdk/spdk_pid25982 00:50:28.147 Removing: /var/run/dpdk/spdk_pid265120 00:50:28.147 Removing: /var/run/dpdk/spdk_pid270725 00:50:28.147 Removing: /var/run/dpdk/spdk_pid278039 00:50:28.147 Removing: /var/run/dpdk/spdk_pid279538 00:50:28.147 Removing: /var/run/dpdk/spdk_pid281387 00:50:28.147 Removing: /var/run/dpdk/spdk_pid282983 00:50:28.147 Removing: /var/run/dpdk/spdk_pid288732 00:50:28.147 Removing: /var/run/dpdk/spdk_pid29354 00:50:28.147 Removing: /var/run/dpdk/spdk_pid293706 00:50:28.147 Removing: /var/run/dpdk/spdk_pid302790 00:50:28.147 Removing: /var/run/dpdk/spdk_pid302820 00:50:28.147 Removing: /var/run/dpdk/spdk_pid307948 00:50:28.147 Removing: /var/run/dpdk/spdk_pid308184 00:50:28.147 Removing: /var/run/dpdk/spdk_pid308512 00:50:28.147 Removing: /var/run/dpdk/spdk_pid309031 00:50:28.147 Removing: /var/run/dpdk/spdk_pid309118 00:50:28.147 Removing: /var/run/dpdk/spdk_pid314565 00:50:28.147 Removing: /var/run/dpdk/spdk_pid315212 00:50:28.147 Removing: /var/run/dpdk/spdk_pid320568 00:50:28.147 Removing: /var/run/dpdk/spdk_pid323873 00:50:28.147 Removing: /var/run/dpdk/spdk_pid330877 00:50:28.147 Removing: /var/run/dpdk/spdk_pid337408 00:50:28.147 Removing: /var/run/dpdk/spdk_pid347661 00:50:28.147 Removing: /var/run/dpdk/spdk_pid356028 00:50:28.147 Removing: /var/run/dpdk/spdk_pid356058 00:50:28.147 Removing: /var/run/dpdk/spdk_pid380896 00:50:28.147 Removing: /var/run/dpdk/spdk_pid381582 00:50:28.147 Removing: /var/run/dpdk/spdk_pid382322 00:50:28.147 Removing: /var/run/dpdk/spdk_pid383195 00:50:28.147 Removing: /var/run/dpdk/spdk_pid384327 00:50:28.147 Removing: /var/run/dpdk/spdk_pid385019 00:50:28.147 Removing: /var/run/dpdk/spdk_pid385700 00:50:28.147 Removing: /var/run/dpdk/spdk_pid386400 00:50:28.147 Removing: /var/run/dpdk/spdk_pid391658 00:50:28.147 Removing: /var/run/dpdk/spdk_pid391906 00:50:28.147 Removing: /var/run/dpdk/spdk_pid399136 00:50:28.147 Removing: /var/run/dpdk/spdk_pid399496 00:50:28.147 Removing: /var/run/dpdk/spdk_pid405972 00:50:28.147 Removing: /var/run/dpdk/spdk_pid411006 00:50:28.147 Removing: /var/run/dpdk/spdk_pid4158589 00:50:28.147 Removing: /var/run/dpdk/spdk_pid4160079 00:50:28.147 Removing: /var/run/dpdk/spdk_pid4160706 00:50:28.147 Removing: /var/run/dpdk/spdk_pid4161960 00:50:28.148 Removing: /var/run/dpdk/spdk_pid4162152 00:50:28.148 Removing: /var/run/dpdk/spdk_pid4163402 00:50:28.148 Removing: /var/run/dpdk/spdk_pid4163529 00:50:28.148 Removing: /var/run/dpdk/spdk_pid4163954 00:50:28.148 Removing: /var/run/dpdk/spdk_pid4165093 00:50:28.148 Removing: /var/run/dpdk/spdk_pid4166240 00:50:28.148 Removing: /var/run/dpdk/spdk_pid4166741 00:50:28.148 Removing: /var/run/dpdk/spdk_pid4167138 00:50:28.148 Removing: /var/run/dpdk/spdk_pid4167553 00:50:28.148 Removing: /var/run/dpdk/spdk_pid4167956 00:50:28.148 Removing: /var/run/dpdk/spdk_pid4168148 00:50:28.148 Removing: /var/run/dpdk/spdk_pid4168345 00:50:28.148 Removing: /var/run/dpdk/spdk_pid4168735 00:50:28.148 Removing: /var/run/dpdk/spdk_pid4170010 00:50:28.148 Removing: /var/run/dpdk/spdk_pid4173508 00:50:28.148 Removing: /var/run/dpdk/spdk_pid4173752 00:50:28.148 Removing: 
/var/run/dpdk/spdk_pid4174120 00:50:28.148 Removing: /var/run/dpdk/spdk_pid4174156 00:50:28.148 Removing: /var/run/dpdk/spdk_pid4174831 00:50:28.148 Removing: /var/run/dpdk/spdk_pid4175036 00:50:28.148 Removing: /var/run/dpdk/spdk_pid4175538 00:50:28.148 Removing: /var/run/dpdk/spdk_pid4175579 00:50:28.148 Removing: /var/run/dpdk/spdk_pid4175928 00:50:28.148 Removing: /var/run/dpdk/spdk_pid4176260 00:50:28.148 Removing: /var/run/dpdk/spdk_pid4176411 00:50:28.148 Removing: /var/run/dpdk/spdk_pid4176632 00:50:28.148 Removing: /var/run/dpdk/spdk_pid4177109 00:50:28.148 Removing: /var/run/dpdk/spdk_pid4177436 00:50:28.148 Removing: /var/run/dpdk/spdk_pid4177836 00:50:28.148 Removing: /var/run/dpdk/spdk_pid4182450 00:50:28.148 Removing: /var/run/dpdk/spdk_pid4187745 00:50:28.148 Removing: /var/run/dpdk/spdk_pid41936 00:50:28.148 Removing: /var/run/dpdk/spdk_pid422616 00:50:28.148 Removing: /var/run/dpdk/spdk_pid423367 00:50:28.148 Removing: /var/run/dpdk/spdk_pid428914 00:50:28.148 Removing: /var/run/dpdk/spdk_pid429275 00:50:28.148 Removing: /var/run/dpdk/spdk_pid434306 00:50:28.148 Removing: /var/run/dpdk/spdk_pid441153 00:50:28.148 Removing: /var/run/dpdk/spdk_pid444101 00:50:28.148 Removing: /var/run/dpdk/spdk_pid456254 00:50:28.148 Removing: /var/run/dpdk/spdk_pid466936 00:50:28.148 Removing: /var/run/dpdk/spdk_pid468935 00:50:28.148 Removing: /var/run/dpdk/spdk_pid469949 00:50:28.148 Removing: /var/run/dpdk/spdk_pid489995 00:50:28.148 Removing: /var/run/dpdk/spdk_pid494695 00:50:28.148 Removing: /var/run/dpdk/spdk_pid497995 00:50:28.148 Removing: /var/run/dpdk/spdk_pid505343 00:50:28.148 Removing: /var/run/dpdk/spdk_pid505458 00:50:28.148 Removing: /var/run/dpdk/spdk_pid511321 00:50:28.148 Removing: /var/run/dpdk/spdk_pid513571 00:50:28.148 Removing: /var/run/dpdk/spdk_pid516048 00:50:28.148 Removing: /var/run/dpdk/spdk_pid517257 00:50:28.148 Removing: /var/run/dpdk/spdk_pid519757 00:50:28.148 Removing: /var/run/dpdk/spdk_pid521235 00:50:28.148 Removing: /var/run/dpdk/spdk_pid52866 00:50:28.409 Removing: /var/run/dpdk/spdk_pid531779 00:50:28.409 Removing: /var/run/dpdk/spdk_pid532269 00:50:28.409 Removing: /var/run/dpdk/spdk_pid532859 00:50:28.409 Removing: /var/run/dpdk/spdk_pid535767 00:50:28.410 Removing: /var/run/dpdk/spdk_pid536430 00:50:28.410 Removing: /var/run/dpdk/spdk_pid537070 00:50:28.410 Removing: /var/run/dpdk/spdk_pid541820 00:50:28.410 Removing: /var/run/dpdk/spdk_pid541982 00:50:28.410 Removing: /var/run/dpdk/spdk_pid543797 00:50:28.410 Removing: /var/run/dpdk/spdk_pid544236 00:50:28.410 Removing: /var/run/dpdk/spdk_pid544509 00:50:28.410 Removing: /var/run/dpdk/spdk_pid54998 00:50:28.410 Removing: /var/run/dpdk/spdk_pid56027 00:50:28.410 Removing: /var/run/dpdk/spdk_pid6829 00:50:28.410 Removing: /var/run/dpdk/spdk_pid7515 00:50:28.410 Removing: /var/run/dpdk/spdk_pid77032 00:50:28.410 Removing: /var/run/dpdk/spdk_pid82335 00:50:28.410 Clean 00:50:28.410 22:49:23 -- common/autotest_common.sh@1451 -- # return 0 00:50:28.410 22:49:23 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:50:28.410 22:49:23 -- common/autotest_common.sh@730 -- # xtrace_disable 00:50:28.410 22:49:23 -- common/autotest_common.sh@10 -- # set +x 00:50:28.410 22:49:23 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:50:28.410 22:49:23 -- common/autotest_common.sh@730 -- # xtrace_disable 00:50:28.410 22:49:23 -- common/autotest_common.sh@10 -- # set +x 00:50:28.410 22:49:23 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 
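The teardown above is driven by helpers from common/autotest_common.sh: killprocess validates the pid argument, probes the target with kill -0, resolves its name with ps, reports it, sends the signal, and waits for the pid to exit, after which autotest_cleanup sweeps the DPDK runtime state shown in the Removing: lines. Below is a minimal bash sketch of the killprocess flow reconstructed from the traced steps; it is not copied from the script, so the sudo branch and the signal choice are assumptions.

    # Reconstruction of the traced killprocess flow; assumed details are marked.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                 # the '[' -z ... ']' guard in the trace
        kill -0 "$pid" 2>/dev/null || return 0    # already gone: nothing to kill
        local process_name=
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        echo "killing process with pid $pid"
        if [ "$process_name" = sudo ]; then
            sudo kill "$pid"                      # assumed: privileged kill when the target is sudo-wrapped
        else
            kill "$pid"
        fi
        wait "$pid" 2>/dev/null || true           # 'wait 544236' works because the target is a child of this shell
    }

In the log this runs as killprocess 544236 against the keyring target (process name reactor_0), while the earlier wait 544509 simply reaps the already-finished bdevperf process.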
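The Removing: list itself is that sweep walking /var/run/dpdk: each spdkN runtime directory carries a config file, fbarray_memseg-2048k-* and fbarray_memzone mappings, and hugepage_info, alongside pid-named entries and the trace files under /dev/shm. A sketch of the sweep, assuming the helper simply globs and unlinks; the glob patterns are inferred from the output, and the real cleanup may also release hugepages and reset driver bindings:

    # Sweep DPDK runtime state left behind by SPDK targets; patterns are
    # inferred from the Removing: lines, not taken from the real helper.
    echo Cleaning
    for d in /var/run/dpdk/spdk[0-9]* /var/run/dpdk/spdk_pid*; do
        [ -e "$d" ] || continue
        for f in "$d"/config "$d"/fbarray_* "$d"/hugepage_info; do
            [ -e "$f" ] && { echo "Removing: $f"; rm -f "$f"; }
        done
        echo "Removing: $d"
        rm -rf "$d"
    done
    # shared-memory trace files seen above (nvmf_trace.0, bdev_svc_trace.1, ...)
    rm -f /dev/shm/*_trace.* /dev/shm/spdk_tgt_trace.pid*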
00:50:28.410 22:49:23 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:50:28.410 22:49:23 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:50:28.410 22:49:23 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:50:28.410 22:49:23 -- spdk/autotest.sh@394 -- # hostname 00:50:28.410 22:49:23 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-10 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:50:28.671 geninfo: WARNING: invalid characters removed from testname! 00:50:55.253 22:49:49 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:50:57.167 22:49:52 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:50:59.082 22:49:54 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:51:00.994 22:49:55 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:51:02.380 22:49:57 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:51:04.427 22:49:59 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:51:05.825 22:50:00 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:51:05.825 22:50:00 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:51:05.825 22:50:00 -- common/autotest_common.sh@1681 -- $ lcov --version 00:51:05.825 22:50:00 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:51:05.825 22:50:00 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:51:05.825 22:50:00 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:51:05.825 22:50:00 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:51:05.825 22:50:00 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:51:05.825 22:50:00 -- scripts/common.sh@336 -- $ IFS=.-: 00:51:05.825 22:50:00 -- scripts/common.sh@336 -- $ read -ra ver1 00:51:05.825 22:50:00 -- scripts/common.sh@337 -- $ IFS=.-: 00:51:05.825 22:50:00 -- scripts/common.sh@337 -- $ read -ra ver2 00:51:05.825 22:50:00 -- scripts/common.sh@338 -- $ local 'op=<' 00:51:05.825 22:50:00 -- scripts/common.sh@340 -- $ ver1_l=2 00:51:05.826 22:50:00 -- scripts/common.sh@341 -- $ ver2_l=1 00:51:05.826 22:50:00 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:51:05.826 22:50:00 -- scripts/common.sh@344 -- $ case "$op" in 00:51:05.826 22:50:00 -- scripts/common.sh@345 -- $ : 1 00:51:05.826 22:50:00 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:51:05.826 22:50:00 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:51:05.826 22:50:00 -- scripts/common.sh@365 -- $ decimal 1 00:51:05.826 22:50:00 -- scripts/common.sh@353 -- $ local d=1 00:51:05.826 22:50:00 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:51:05.826 22:50:00 -- scripts/common.sh@355 -- $ echo 1 00:51:05.826 22:50:00 -- scripts/common.sh@365 -- $ ver1[v]=1 00:51:05.826 22:50:00 -- scripts/common.sh@366 -- $ decimal 2 00:51:05.826 22:50:01 -- scripts/common.sh@353 -- $ local d=2 00:51:05.826 22:50:01 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:51:05.826 22:50:01 -- scripts/common.sh@355 -- $ echo 2 00:51:05.826 22:50:01 -- scripts/common.sh@366 -- $ ver2[v]=2 00:51:05.826 22:50:01 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:51:05.826 22:50:01 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:51:05.826 22:50:01 -- scripts/common.sh@368 -- $ return 0 00:51:05.826 22:50:01 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:51:05.826 22:50:01 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:51:05.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:05.826 --rc genhtml_branch_coverage=1 00:51:05.826 --rc genhtml_function_coverage=1 00:51:05.826 --rc genhtml_legend=1 00:51:05.826 --rc geninfo_all_blocks=1 00:51:05.826 --rc geninfo_unexecuted_blocks=1 00:51:05.826 00:51:05.826 ' 00:51:05.826 22:50:01 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:51:05.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:05.826 --rc genhtml_branch_coverage=1 00:51:05.826 --rc genhtml_function_coverage=1 00:51:05.826 --rc genhtml_legend=1 00:51:05.826 --rc geninfo_all_blocks=1 00:51:05.826 --rc geninfo_unexecuted_blocks=1 00:51:05.826 00:51:05.826 ' 00:51:05.826 22:50:01 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:51:05.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:05.826 --rc genhtml_branch_coverage=1 00:51:05.826 --rc genhtml_function_coverage=1 00:51:05.826 --rc genhtml_legend=1 00:51:05.826 --rc 
geninfo_all_blocks=1 00:51:05.826 --rc geninfo_unexecuted_blocks=1 00:51:05.826 00:51:05.826 ' 00:51:05.826 22:50:01 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:51:05.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:05.826 --rc genhtml_branch_coverage=1 00:51:05.826 --rc genhtml_function_coverage=1 00:51:05.826 --rc genhtml_legend=1 00:51:05.826 --rc geninfo_all_blocks=1 00:51:05.826 --rc geninfo_unexecuted_blocks=1 00:51:05.826 00:51:05.826 ' 00:51:05.826 22:50:01 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:51:05.826 22:50:01 -- scripts/common.sh@15 -- $ shopt -s extglob 00:51:05.826 22:50:01 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:51:05.826 22:50:01 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:51:05.826 22:50:01 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:51:05.826 22:50:01 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:05.826 22:50:01 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:05.826 22:50:01 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:05.826 22:50:01 -- paths/export.sh@5 -- $ export PATH 00:51:05.826 22:50:01 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:05.826 22:50:01 -- common/autobuild_common.sh@478 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:51:05.826 22:50:01 -- common/autobuild_common.sh@479 -- $ date +%s 00:51:05.826 22:50:01 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727815801.XXXXXX 00:51:05.826 22:50:01 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727815801.4fFxdF 00:51:05.826 22:50:01 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:51:05.826 22:50:01 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']' 00:51:05.826 22:50:01 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:51:05.826 22:50:01 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 
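The xtrace above is scripts/common.sh deciding whether the installed lcov predates 2.x: cmp_versions splits both version strings on IFS=.-: into arrays and compares them field by field, and only when lcov < 2 does the run keep the --rc lcov_branch_coverage/lcov_function_coverage knobs that end up in LCOV_OPTS and on every lcov call above. A condensed sketch of the less-than path as traced; the real helper also handles '>' and '=' and sanitizes each field through a decimal() helper, elided here (non-numeric fields would need that sanitizing):

    # Field-wise version compare, condensed from the scripts/common.sh trace.
    # Returns 0 when $1 < $2, so `lt 1.15 2` succeeds for an lcov 1.x.
    lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v len=${#ver1[@]}
        (( ${#ver2[@]} > len )) && len=${#ver2[@]}
        for (( v = 0; v < len; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields compare as 0
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1   # equal is not less-than
    }

    if lt "$(lcov --version | awk '{print $NF}')" 2; then
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi

The coverage pass itself then reduces to capture, merge, filter; in this condensed recap $rootdir and $out stand in for the long workspace paths in the log, and cov_base.info was captured earlier in the run:

    lcov $LCOV_OPTS -q -c --no-external -d "$rootdir" -t "$(hostname)" -o "$out/cov_test.info"
    lcov $LCOV_OPTS -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
    lcov $LCOV_OPTS -q -r "$out/cov_total.info" '*/dpdk/*' '/usr/*' -o "$out/cov_total.info"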
00:51:05.826 22:50:01 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:51:05.826 22:50:01 -- common/autobuild_common.sh@495 -- $ get_config_params 00:51:05.826 22:50:01 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:51:05.826 22:50:01 -- common/autotest_common.sh@10 -- $ set +x 00:51:05.826 22:50:01 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:51:05.826 22:50:01 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:51:05.826 22:50:01 -- pm/common@17 -- $ local monitor 00:51:05.826 22:50:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:51:05.826 22:50:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:51:05.826 22:50:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:51:05.826 22:50:01 -- pm/common@21 -- $ date +%s 00:51:05.826 22:50:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:51:05.826 22:50:01 -- pm/common@21 -- $ date +%s 00:51:05.826 22:50:01 -- pm/common@25 -- $ sleep 1 00:51:05.826 22:50:01 -- pm/common@21 -- $ date +%s 00:51:05.826 22:50:01 -- pm/common@21 -- $ date +%s 00:51:05.826 22:50:01 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1727815801 00:51:05.826 22:50:01 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1727815801 00:51:05.826 22:50:01 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1727815801 00:51:05.826 22:50:01 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1727815801 00:51:06.086 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1727815801_collect-vmstat.pm.log 00:51:06.086 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1727815801_collect-cpu-load.pm.log 00:51:06.086 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1727815801_collect-cpu-temp.pm.log 00:51:06.086 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1727815801_collect-bmc-pm.bmc.pm.log 00:51:07.029 22:50:02 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:51:07.029 22:50:02 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:51:07.029 22:50:02 -- spdk/autopackage.sh@14 -- $ timing_finish 00:51:07.029 22:50:02 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:51:07.029 22:50:02 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:51:07.029 
22:50:02 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:51:07.029 22:50:02 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:51:07.029 22:50:02 -- pm/common@29 -- $ signal_monitor_resources TERM 00:51:07.029 22:50:02 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:51:07.029 22:50:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:51:07.029 22:50:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:51:07.029 22:50:02 -- pm/common@44 -- $ pid=557256 00:51:07.029 22:50:02 -- pm/common@50 -- $ kill -TERM 557256 00:51:07.029 22:50:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:51:07.029 22:50:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:51:07.029 22:50:02 -- pm/common@44 -- $ pid=557257 00:51:07.029 22:50:02 -- pm/common@50 -- $ kill -TERM 557257 00:51:07.029 22:50:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:51:07.029 22:50:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:51:07.029 22:50:02 -- pm/common@44 -- $ pid=557259 00:51:07.029 22:50:02 -- pm/common@50 -- $ kill -TERM 557259 00:51:07.029 22:50:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:51:07.029 22:50:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:51:07.029 22:50:02 -- pm/common@44 -- $ pid=557284 00:51:07.029 22:50:02 -- pm/common@50 -- $ sudo -E kill -TERM 557284 00:51:07.029 + [[ -n 4074102 ]] 00:51:07.029 + sudo kill 4074102 00:51:07.040 [Pipeline] } 00:51:07.055 [Pipeline] // stage 00:51:07.059 [Pipeline] } 00:51:07.074 [Pipeline] // timeout 00:51:07.079 [Pipeline] } 00:51:07.092 [Pipeline] // catchError 00:51:07.097 [Pipeline] } 00:51:07.111 [Pipeline] // wrap 00:51:07.117 [Pipeline] } 00:51:07.129 [Pipeline] // catchError 00:51:07.138 [Pipeline] stage 00:51:07.140 [Pipeline] { (Epilogue) 00:51:07.153 [Pipeline] catchError 00:51:07.155 [Pipeline] { 00:51:07.168 [Pipeline] echo 00:51:07.170 Cleanup processes 00:51:07.175 [Pipeline] sh 00:51:07.462 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:51:07.462 557427 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:51:07.462 557953 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:51:07.477 [Pipeline] sh 00:51:07.767 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:51:07.767 ++ grep -v 'sudo pgrep' 00:51:07.767 ++ awk '{print $1}' 00:51:07.767 + sudo kill -9 557427 00:51:07.780 [Pipeline] sh 00:51:08.068 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:51:20.310 [Pipeline] sh 00:51:20.596 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:51:20.596 Artifacts sizes are good 00:51:20.609 [Pipeline] archiveArtifacts 00:51:20.616 Archiving artifacts 00:51:20.791 [Pipeline] sh 00:51:21.076 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:51:21.090 [Pipeline] cleanWs 00:51:21.099 [WS-CLEANUP] Deleting project workspace... 00:51:21.099 [WS-CLEANUP] Deferred wipeout is used... 
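The autopackage prologue traced earlier (pm/common@17-25) and the EXIT trap firing here are the two halves of one pattern: start_monitor_resources launches one collector per resource (collect-cpu-load, collect-vmstat, collect-cpu-temp, and collect-bmc-pm under sudo -E), each logging to ../output/power/monitor.autopackage.sh.<epoch> and leaving a pid file behind; stop_monitor_resources, installed as the trap at autobuild_common.sh@498, reads those pid files back and TERMs each collector, which is exactly what the pm/common@44/@50 lines above show. A sketch of the pid-file pattern, with names taken from the trace but bodies assumed:

    # Pid-file based start/stop for background resource monitors
    # (names from the trace; bodies and pid-file writing are assumptions).
    MONITOR_RESOURCES=(collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm)
    power_dir=$output_dir/power          # $output_dir: the .../spdk/../output path in the log

    start_monitor_resources() {
        local monitor stamp
        stamp=$(date +%s)
        for monitor in "${MONITOR_RESOURCES[@]}"; do
            "$rootdir/scripts/perf/pm/$monitor" -d "$power_dir" -l -p "monitor.autopackage.sh.$stamp" &
            echo $! > "$power_dir/$monitor.pid"
        done
    }

    stop_monitor_resources() {
        local monitor pid
        for monitor in "${MONITOR_RESOURCES[@]}"; do
            [[ -e $power_dir/$monitor.pid ]] || continue
            pid=$(< "$power_dir/$monitor.pid")
            kill -TERM "$pid" || true    # the BMC collector is TERMed via 'sudo -E kill' in the trace
        done
    }

    trap stop_monitor_resources EXIT

Registering the stop half as an EXIT trap means the collectors are reaped even if autopackage aborts early, which is why the four kill -TERM lines appear unconditionally at the end of the run.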
00:51:21.106 [WS-CLEANUP] done 00:51:21.108 [Pipeline] } 00:51:21.123 [Pipeline] // catchError 00:51:21.134 [Pipeline] sh 00:51:21.420 + logger -p user.info -t JENKINS-CI 00:51:21.429 [Pipeline] } 00:51:21.442 [Pipeline] // stage 00:51:21.447 [Pipeline] } 00:51:21.462 [Pipeline] // node 00:51:21.467 [Pipeline] End of Pipeline 00:51:21.503 Finished: SUCCESS